Jan 26 18:41:54 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 18:41:54 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 18:41:54 crc restorecon[4685]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc 
restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc 
restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc 
restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc 
restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:54 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:54 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 18:41:55 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 
crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc 
restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 18:41:55 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 18:41:55 crc kubenswrapper[4770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:41:55 crc kubenswrapper[4770]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 18:41:55 crc kubenswrapper[4770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:41:55 crc kubenswrapper[4770]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 18:41:55 crc kubenswrapper[4770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 18:41:55 crc kubenswrapper[4770]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.569765 4770 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578173 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578234 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578247 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578259 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578271 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578282 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578293 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578304 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578314 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 18:41:55 crc 
kubenswrapper[4770]: W0126 18:41:55.578324 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578335 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578346 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578358 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578368 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578377 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578386 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578396 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578406 4770 feature_gate.go:330] unrecognized feature gate: Example Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578416 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578426 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578436 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578447 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578457 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578468 4770 feature_gate.go:330] unrecognized 
feature gate: SignatureStores Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578478 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578488 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578498 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578508 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578517 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578528 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578538 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578547 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578557 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578567 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578577 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578599 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578611 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578621 4770 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578632 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578642 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578651 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578661 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578672 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578682 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578727 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578739 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578749 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578759 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578771 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578780 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578790 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578799 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578812 4770 feature_gate.go:353] 
Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578826 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578838 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578851 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578863 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578877 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578888 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578900 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578910 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578920 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578964 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578975 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578984 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.578994 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.579004 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.579014 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.579024 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.579034 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.579044 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579259 4770 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579282 4770 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579299 4770 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579312 4770 flags.go:64] FLAG: 
--application-metrics-count-limit="100" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579324 4770 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579333 4770 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579345 4770 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579357 4770 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579366 4770 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579377 4770 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579387 4770 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579397 4770 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579407 4770 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579416 4770 flags.go:64] FLAG: --cgroup-root="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579425 4770 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579434 4770 flags.go:64] FLAG: --client-ca-file="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579445 4770 flags.go:64] FLAG: --cloud-config="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579454 4770 flags.go:64] FLAG: --cloud-provider="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579463 4770 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579474 4770 flags.go:64] FLAG: --cluster-domain="" Jan 26 18:41:55 crc 
kubenswrapper[4770]: I0126 18:41:55.579484 4770 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579494 4770 flags.go:64] FLAG: --config-dir="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579503 4770 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579512 4770 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579534 4770 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579544 4770 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579554 4770 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579563 4770 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579573 4770 flags.go:64] FLAG: --contention-profiling="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579582 4770 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579592 4770 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579602 4770 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579610 4770 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579622 4770 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579632 4770 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579641 4770 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579650 
4770 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579659 4770 flags.go:64] FLAG: --enable-server="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579668 4770 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579680 4770 flags.go:64] FLAG: --event-burst="100" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579690 4770 flags.go:64] FLAG: --event-qps="50" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579732 4770 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579742 4770 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579751 4770 flags.go:64] FLAG: --eviction-hard="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579763 4770 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579772 4770 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579781 4770 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579791 4770 flags.go:64] FLAG: --eviction-soft="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579801 4770 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579810 4770 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579819 4770 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579828 4770 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579837 4770 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 
18:41:55.579846 4770 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579855 4770 flags.go:64] FLAG: --feature-gates="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579866 4770 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579875 4770 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579884 4770 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579893 4770 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579902 4770 flags.go:64] FLAG: --healthz-port="10248" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579912 4770 flags.go:64] FLAG: --help="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579920 4770 flags.go:64] FLAG: --hostname-override="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579929 4770 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579938 4770 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579947 4770 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579957 4770 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579965 4770 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579977 4770 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579988 4770 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.579999 4770 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580010 
4770 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580021 4770 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580034 4770 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580048 4770 flags.go:64] FLAG: --kube-reserved="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580059 4770 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580069 4770 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580080 4770 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580091 4770 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580102 4770 flags.go:64] FLAG: --lock-file="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580112 4770 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580124 4770 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580137 4770 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580151 4770 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580161 4770 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580171 4770 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580179 4770 flags.go:64] FLAG: --logging-format="text" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580188 4770 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 18:41:55 crc kubenswrapper[4770]: 
I0126 18:41:55.580198 4770 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580207 4770 flags.go:64] FLAG: --manifest-url="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580215 4770 flags.go:64] FLAG: --manifest-url-header="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580227 4770 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580236 4770 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580247 4770 flags.go:64] FLAG: --max-pods="110" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580257 4770 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580265 4770 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580274 4770 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580284 4770 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580293 4770 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580302 4770 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580311 4770 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580331 4770 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580341 4770 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580350 4770 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580360 4770 
flags.go:64] FLAG: --pod-cidr="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580368 4770 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580382 4770 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580391 4770 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580400 4770 flags.go:64] FLAG: --pods-per-core="0" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580409 4770 flags.go:64] FLAG: --port="10250" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580418 4770 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580426 4770 flags.go:64] FLAG: --provider-id="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580435 4770 flags.go:64] FLAG: --qos-reserved="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580444 4770 flags.go:64] FLAG: --read-only-port="10255" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580454 4770 flags.go:64] FLAG: --register-node="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580464 4770 flags.go:64] FLAG: --register-schedulable="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580473 4770 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580488 4770 flags.go:64] FLAG: --registry-burst="10" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580497 4770 flags.go:64] FLAG: --registry-qps="5" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580506 4770 flags.go:64] FLAG: --reserved-cpus="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580516 4770 flags.go:64] FLAG: --reserved-memory="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 
18:41:55.580527 4770 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580537 4770 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580547 4770 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580556 4770 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580564 4770 flags.go:64] FLAG: --runonce="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580573 4770 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580582 4770 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580593 4770 flags.go:64] FLAG: --seccomp-default="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580601 4770 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580610 4770 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580619 4770 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580628 4770 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580638 4770 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580647 4770 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580656 4770 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580665 4770 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580673 4770 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" 
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580683 4770 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580692 4770 flags.go:64] FLAG: --system-cgroups="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580732 4770 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580746 4770 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580755 4770 flags.go:64] FLAG: --tls-cert-file="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580764 4770 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580775 4770 flags.go:64] FLAG: --tls-min-version="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580784 4770 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580793 4770 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580803 4770 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580813 4770 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580825 4770 flags.go:64] FLAG: --v="2" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580839 4770 flags.go:64] FLAG: --version="false" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580853 4770 flags.go:64] FLAG: --vmodule="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580867 4770 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.580880 4770 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581137 4770 feature_gate.go:330] unrecognized feature gate: 
GCPClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581152 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581163 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581174 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581184 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581194 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581205 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581217 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581228 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581239 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581250 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581261 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581271 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581281 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581291 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581301 4770 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581313 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581323 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581333 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581343 4770 feature_gate.go:330] unrecognized feature gate: Example Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581354 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581365 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581375 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581385 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581409 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581419 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581429 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581440 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581449 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581460 4770 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581470 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581484 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581498 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581509 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581519 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581530 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581540 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581550 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581562 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581573 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581584 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581597 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581610 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581620 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581631 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581641 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581651 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581661 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581672 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581683 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581693 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581741 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581752 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581762 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581773 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581783 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581799 4770 feature_gate.go:330] 
unrecognized feature gate: InsightsRuntimeExtractor Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581809 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581818 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581828 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581840 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581851 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581862 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581872 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581886 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581899 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581911 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581921 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581932 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581943 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.581954 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.581985 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.593273 4770 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.593319 4770 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593461 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593487 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593498 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593507 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593516 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593525 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593534 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593542 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593552 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593561 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593570 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593579 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593587 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593597 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593605 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 
18:41:55.593614 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593623 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593631 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593639 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593647 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593655 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593663 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593671 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593679 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593687 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593725 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593737 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593749 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593758 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593766 4770 feature_gate.go:330] unrecognized feature gate: 
VSphereStaticIPs Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593773 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593782 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593790 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593798 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593807 4770 feature_gate.go:330] unrecognized feature gate: Example Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593815 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593824 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593832 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593840 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593847 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593855 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593862 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593870 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593878 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593885 4770 feature_gate.go:330] 
unrecognized feature gate: PrivateHostedZoneAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593893 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593903 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593913 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593922 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593930 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593938 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593948 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593957 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593965 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593973 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593981 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593990 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.593997 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594005 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594013 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594021 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594028 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594036 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594044 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594052 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594059 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 
18:41:55.594067 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594077 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594087 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594096 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594106 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.594119 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594368 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594379 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594389 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594398 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594406 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 
18:41:55.594414 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594425 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594436 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594445 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594453 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594462 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594470 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594479 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594487 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594495 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594503 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594512 4770 feature_gate.go:330] unrecognized feature gate: Example Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594520 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594527 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 18:41:55 
crc kubenswrapper[4770]: W0126 18:41:55.594535 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594543 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594552 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594562 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594570 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594578 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594586 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594594 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594602 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594609 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594617 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594626 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594633 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594641 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594651 
4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594660 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594668 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594675 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594683 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594692 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594732 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594743 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594752 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594760 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594769 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594776 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594784 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594792 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594799 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 18:41:55 crc kubenswrapper[4770]: 
W0126 18:41:55.594808 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594816 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594823 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594831 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594841 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594851 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594861 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594871 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594879 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594887 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594895 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594903 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594911 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594919 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 18:41:55 crc 
kubenswrapper[4770]: W0126 18:41:55.594926 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594961 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594970 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594978 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594986 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.594995 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.595004 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.595014 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.595023 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.595036 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.595284 4770 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.599434 4770 bootstrap.go:85] 
"Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.599559 4770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.600454 4770 server.go:997] "Starting client certificate rotation" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.600490 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.601035 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 14:47:38.240693858 +0000 UTC Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.601217 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.608509 4770 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.609797 4770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.611899 4770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.622343 4770 log.go:25] "Validated CRI v1 runtime API" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.638146 4770 log.go:25] "Validated CRI v1 
image API" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.640214 4770 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.642393 4770 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-18-37-05-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.642441 4770 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.668432 4770 manager.go:217] Machine: {Timestamp:2026-01-26 18:41:55.666318169 +0000 UTC m=+0.231224981 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:72c9bf02-a067-4dd0-b297-10816a0f4fa6 BootID:e92cb904-8251-4c58-a8df-ec04634af33f Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 
DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:61:11:93 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:61:11:93 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:44:5e:69 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c4:03:2a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:88:a7:88 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b8:c8:b1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:b7:43:13:72:67 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3e:49:ba:49:3b:a6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 
Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified 
Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.668881 4770 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.669100 4770 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.670221 4770 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.670605 4770 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.670680 4770 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.671134 4770 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.671162 4770 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.671584 4770 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.671643 4770 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.671998 4770 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.672169 4770 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.673309 4770 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.673349 4770 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.673397 4770 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.673424 4770 kubelet.go:324] "Adding apiserver pod source"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.673450 4770 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.676404 4770 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.676415 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.676418 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.676560 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.676569 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.677226 4770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.678318 4770 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679312 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679367 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679388 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679408 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679436 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679454 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679472 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679500 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679522 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679541 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679563 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.679580 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.680369 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.681214 4770 server.go:1280] "Started kubelet"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.681740 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.682089 4770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.682086 4770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.683249 4770 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 26 18:41:55 crc systemd[1]: Started Kubernetes Kubelet.
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.685870 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e5c0d45f498df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 18:41:55.681171679 +0000 UTC m=+0.246078441,LastTimestamp:2026-01-26 18:41:55.681171679 +0000 UTC m=+0.246078441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.689060 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.689431 4770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.689575 4770 server.go:460] "Adding debug handlers to kubelet server"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.689688 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:17:18.460071989 +0000 UTC
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.694852 4770 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.694897 4770 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.695041 4770 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.695168 4770 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.695927 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms"
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.695924 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.696024 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.702484 4770 factory.go:55] Registering systemd factory
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.702528 4770 factory.go:221] Registration of the systemd container factory successfully
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.703167 4770 factory.go:153] Registering CRI-O factory
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.703194 4770 factory.go:221] Registration of the crio container factory successfully
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.703258 4770 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.703278 4770 factory.go:103] Registering Raw factory
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.703292 4770 manager.go:1196] Started watching for new ooms in manager
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.704021 4770 manager.go:319] Starting recovery of all containers
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.707759 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708482 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708567 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708641 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708673 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708754 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708791 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708871 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708946 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.708976 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709045 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709077 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709147 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709184 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709261 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709294 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709377 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709455 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709487 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709560 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709590 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709653 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709686 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709770 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709797 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709893 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709966 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.709997 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710063 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710092 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710165 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710197 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710273 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710301 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710371 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710398 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710471 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710499 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710574 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710646 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710677 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.710959 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.711072 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.711107 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.711184 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.711215 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.711288 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.711320 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.711400 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.714392 4770 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.714575 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.714753 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.714876 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.715057 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.715207 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.715349 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.715485 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.715602 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.715762 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.715888 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716036 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716161 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716280 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716400 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716515 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716638 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716769 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.716888 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717008 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717110 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717240 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717355 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717463 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717586 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717759 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.717897 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.718039 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.718160 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.718282 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.718405 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.718537 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126
18:41:55.718654 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.718786 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.718900 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.719031 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.719150 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.719264 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.719365 4770 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.719476 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.719584 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.719757 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720180 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720237 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720262 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720277 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720294 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720324 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720339 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720358 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720374 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720393 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720420 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720441 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720465 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720485 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720526 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" 
seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720552 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720582 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720611 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720641 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720662 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720688 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720740 4770 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720806 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720832 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720847 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720860 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720870 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720880 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720892 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720901 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720912 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720925 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720935 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720946 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.720991 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721004 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721018 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721028 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721041 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721050 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721060 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721073 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721083 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721094 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721104 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721115 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721132 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721145 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721162 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721174 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721185 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721196 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721207 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721227 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721239 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721251 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721267 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721280 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721297 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721309 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721321 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721335 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721343 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721353 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" 
seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721431 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721443 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721456 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721465 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721474 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721486 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 18:41:55 crc 
kubenswrapper[4770]: I0126 18:41:55.721496 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721511 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721523 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721534 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721548 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721559 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721596 4770 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721609 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721621 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721638 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721649 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721668 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721680 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721690 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721723 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721822 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721843 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721858 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721871 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721890 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.721900 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722290 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722309 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722323 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722342 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722362 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722393 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722405 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722416 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722431 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722442 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722457 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722468 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722479 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722493 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722505 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722520 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722530 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722542 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722559 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722571 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722583 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722593 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722603 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722616 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722628 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722642 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722653 4770 reconstruct.go:97] "Volume reconstruction finished"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.722661 4770 reconciler.go:26] "Reconciler: start to sync state"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.731039 4770 manager.go:324] Recovery completed
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.741759 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.743198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.743232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.743243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.744849 4770 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.744870 4770 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.744889 4770 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.764275 4770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.765770 4770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.765820 4770 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.765852 4770 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.765931 4770 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 26 18:41:55 crc kubenswrapper[4770]: W0126 18:41:55.766420 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.766482 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.768833 4770 policy_none.go:49] "None policy: Start"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.769950 4770 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.769982 4770 state_mem.go:35] "Initializing new in-memory state store"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.796616 4770 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.819869 4770 manager.go:334] "Starting Device Plugin manager"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.819920 4770 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.819934 4770 server.go:79] "Starting device plugin registration server"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.820355 4770 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.820373 4770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.820547 4770 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.820755 4770 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.820773 4770 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.826903 4770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.866439 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.866511 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.867349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.867406 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.867423 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.867600 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.867939 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.867973 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.868646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.868744 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.868808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.868711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.868957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.868983 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.869143 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.869221 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.869254 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.870778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.870812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.870939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.870968 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.870888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.871008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.871164 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.871185 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.871213 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.872224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.872246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.872257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.874114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.874138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.874150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.874269 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.874351 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.874391 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875231 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.875254 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.876873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.876896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.876908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.896654 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.921069 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.922340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.922374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.922383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.922406 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: E0126 18:41:55.922843 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926195 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926246 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926279 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926312 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926344 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926374 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926404 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926434 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926524 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926732 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926770 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926801 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:41:55 crc kubenswrapper[4770]: I0126 18:41:55.926835 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028299 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028377 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028410 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028442 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028474 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028502 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028531 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028564 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028593 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028621 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028647 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\"
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028642 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028767 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028826 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028675 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028881 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028942 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028991 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.028986 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029037 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029002 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029063 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029098 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029052 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029146 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029191 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029223 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029287 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029300 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.029407 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.124032 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.125474 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.125564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.125584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.125621 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:41:56 crc kubenswrapper[4770]: E0126 18:41:56.126536 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.205356 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.228356 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: W0126 18:41:56.235378 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3d3779ade6aae955d833660389e43e89592d839492deb9069278606ce62a26c2 WatchSource:0}: Error finding container 3d3779ade6aae955d833660389e43e89592d839492deb9069278606ce62a26c2: Status 404 returned error can't find the container with id 3d3779ade6aae955d833660389e43e89592d839492deb9069278606ce62a26c2 Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.236793 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.252240 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.259798 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:41:56 crc kubenswrapper[4770]: W0126 18:41:56.286253 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3dc71300dcde0a3e5053fa286827822ac062d54ed06f4250c0e4db2869fe9e7c WatchSource:0}: Error finding container 3dc71300dcde0a3e5053fa286827822ac062d54ed06f4250c0e4db2869fe9e7c: Status 404 returned error can't find the container with id 3dc71300dcde0a3e5053fa286827822ac062d54ed06f4250c0e4db2869fe9e7c Jan 26 18:41:56 crc kubenswrapper[4770]: W0126 18:41:56.289356 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-afe4003ccede2a913fca6ab61bbb9265bf74b826b086cdc9d17c4a01923507f1 WatchSource:0}: Error finding container afe4003ccede2a913fca6ab61bbb9265bf74b826b086cdc9d17c4a01923507f1: Status 404 returned error can't find the container with id afe4003ccede2a913fca6ab61bbb9265bf74b826b086cdc9d17c4a01923507f1 Jan 26 18:41:56 crc kubenswrapper[4770]: E0126 18:41:56.297819 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="800ms" Jan 26 18:41:56 crc kubenswrapper[4770]: W0126 18:41:56.524034 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:56 crc kubenswrapper[4770]: E0126 18:41:56.524124 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: 
failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.527609 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.529732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.529775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.529787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.529816 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:41:56 crc kubenswrapper[4770]: E0126 18:41:56.530325 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.682890 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.689929 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:03:02.968922478 +0000 UTC Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.773983 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.774251 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3dc71300dcde0a3e5053fa286827822ac062d54ed06f4250c0e4db2869fe9e7c"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.774406 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.782442 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.782539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"afe4003ccede2a913fca6ab61bbb9265bf74b826b086cdc9d17c4a01923507f1"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.782786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.782846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.782873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.786625 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.786684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b1709937af0f80e3a6b277d1ac61814b3523fcff37e772cf17d6d3b323a3413d"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.787233 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.788668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.788758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.788777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.790901 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.790947 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c1bcb91d89fdc44a2a5ac7447fefe0063b3efdd87d12481a9e392a2dd130646a"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.791050 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:56 
crc kubenswrapper[4770]: I0126 18:41:56.792185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.792258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.792277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.798863 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.798943 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3d3779ade6aae955d833660389e43e89592d839492deb9069278606ce62a26c2"} Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.799043 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.799925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.799984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:56 crc kubenswrapper[4770]: I0126 18:41:56.800021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:56 crc kubenswrapper[4770]: W0126 18:41:56.872443 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:56 crc kubenswrapper[4770]: E0126 18:41:56.872561 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:41:57 crc kubenswrapper[4770]: E0126 18:41:57.098911 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s" Jan 26 18:41:57 crc kubenswrapper[4770]: W0126 18:41:57.209116 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:57 crc kubenswrapper[4770]: E0126 18:41:57.209200 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:41:57 crc kubenswrapper[4770]: W0126 18:41:57.255169 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:57 crc kubenswrapper[4770]: E0126 
18:41:57.255894 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.331097 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.333021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.333059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.333068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.333093 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:41:57 crc kubenswrapper[4770]: E0126 18:41:57.333495 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.682609 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.690307 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:34:34.574697659 +0000 UTC 
Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.714745 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 18:41:57 crc kubenswrapper[4770]: E0126 18:41:57.715557 4770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:41:57 crc kubenswrapper[4770]: E0126 18:41:57.790890 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e5c0d45f498df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 18:41:55.681171679 +0000 UTC m=+0.246078441,LastTimestamp:2026-01-26 18:41:55.681171679 +0000 UTC m=+0.246078441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.802216 4770 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3" exitCode=0 Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.802322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.802510 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.803650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.803682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.803732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.804310 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5" exitCode=0 Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.804424 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.804601 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.805855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.805908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.805924 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.807763 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.808943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.808979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.808995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.809511 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.809684 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.809730 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.809833 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.810789 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.810839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.810862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.811248 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7" exitCode=0 Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.811354 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.811551 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.814796 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.814845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.814867 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.815894 4770 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a" exitCode=0 Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.815937 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.815972 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"795795fcad582044039d1aa0be8059b315cea9e8596158c10a6fb2717fa04ec5"} Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.816077 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.817094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.817120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:57 crc kubenswrapper[4770]: I0126 18:41:57.817133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:58 crc kubenswrapper[4770]: W0126 18:41:58.234840 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:58 crc kubenswrapper[4770]: E0126 18:41:58.234932 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" 
Jan 26 18:41:58 crc kubenswrapper[4770]: W0126 18:41:58.482956 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused Jan 26 18:41:58 crc kubenswrapper[4770]: E0126 18:41:58.483066 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.691234 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:20:12.394087512 +0000 UTC Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.819006 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35" exitCode=0 Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.819099 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35"} Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.819290 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.820128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.820181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.820199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.820294 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192"} Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.820344 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864"} Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.823287 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82"} Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.823327 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187"} Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.823338 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846"} Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.823349 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:58 crc 
kubenswrapper[4770]: I0126 18:41:58.824094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.824138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.824156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.934352 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.935466 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.935513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.935525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:58 crc kubenswrapper[4770]: I0126 18:41:58.935554 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.691880 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:59:07.429070381 +0000 UTC Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.827800 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a" exitCode=0 Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.827861 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a"} Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.827968 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.828616 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.828654 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.828669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.831074 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49"} Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.831197 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.835349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.835403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.835426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.839391 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5"} Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.839443 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed"} Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.839527 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.840719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.840739 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.840747 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.860142 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.860353 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.861517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.861547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.861556 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:41:59 crc kubenswrapper[4770]: I0126 18:41:59.870226 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.692429 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:31:15.797069687 +0000 UTC Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.798935 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.847787 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452"} Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.847858 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.847865 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf"} Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.847897 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e"} Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.847909 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 
18:42:00.847934 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.847964 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.848048 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.848100 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.849630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.849748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.849776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.850770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.850806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.850822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.851552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.851590 4770 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:00 crc kubenswrapper[4770]: I0126 18:42:00.851608 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.209658 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.364680 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.693282 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:04:24.366703375 +0000 UTC Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.797939 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.857071 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9"} Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.857143 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.857158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c"} Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.857203 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.857204 4770 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.857322 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.857303 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860524 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860587 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.860933 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:01 crc 
kubenswrapper[4770]: I0126 18:42:01.862996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.863086 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:01 crc kubenswrapper[4770]: I0126 18:42:01.863111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.693646 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:37:12.290767484 +0000 UTC Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.746472 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.859922 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.859956 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.859999 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.861434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.861467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.861476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.861593 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.861633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:02 crc kubenswrapper[4770]: I0126 18:42:02.861653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:03 crc kubenswrapper[4770]: I0126 18:42:03.694440 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:08:19.489823117 +0000 UTC Jan 26 18:42:03 crc kubenswrapper[4770]: I0126 18:42:03.861833 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:03 crc kubenswrapper[4770]: I0126 18:42:03.862986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:03 crc kubenswrapper[4770]: I0126 18:42:03.863042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:03 crc kubenswrapper[4770]: I0126 18:42:03.863059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:04 crc kubenswrapper[4770]: I0126 18:42:04.695484 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:26:39.20532754 +0000 UTC Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.258787 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.259021 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.261564 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.261630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.261652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.593128 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.593670 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.595550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.595762 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.595911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:05 crc kubenswrapper[4770]: I0126 18:42:05.695927 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:01:18.304003325 +0000 UTC Jan 26 18:42:05 crc kubenswrapper[4770]: E0126 18:42:05.827189 4770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 18:42:06 crc kubenswrapper[4770]: I0126 18:42:06.696976 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:56:59.373099056 +0000 UTC 
Jan 26 18:42:06 crc kubenswrapper[4770]: I0126 18:42:06.936112 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:42:06 crc kubenswrapper[4770]: I0126 18:42:06.936283 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:06 crc kubenswrapper[4770]: I0126 18:42:06.937579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:06 crc kubenswrapper[4770]: I0126 18:42:06.937619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:06 crc kubenswrapper[4770]: I0126 18:42:06.937634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:06 crc kubenswrapper[4770]: I0126 18:42:06.941377 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:42:07 crc kubenswrapper[4770]: I0126 18:42:07.697644 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:31:57.219606802 +0000 UTC Jan 26 18:42:07 crc kubenswrapper[4770]: I0126 18:42:07.871988 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:07 crc kubenswrapper[4770]: I0126 18:42:07.877252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:07 crc kubenswrapper[4770]: I0126 18:42:07.877305 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:07 crc kubenswrapper[4770]: I0126 18:42:07.877318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 18:42:08 crc kubenswrapper[4770]: I0126 18:42:08.683563 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 18:42:08 crc kubenswrapper[4770]: I0126 18:42:08.699022 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:23:12.472888862 +0000 UTC Jan 26 18:42:08 crc kubenswrapper[4770]: E0126 18:42:08.700502 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 26 18:42:08 crc kubenswrapper[4770]: E0126 18:42:08.936502 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 26 18:42:09 crc kubenswrapper[4770]: W0126 18:42:09.378251 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 18:42:09 crc kubenswrapper[4770]: I0126 18:42:09.378354 4770 trace.go:236] Trace[28334828]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 18:41:59.376) (total time: 10002ms): Jan 26 18:42:09 crc kubenswrapper[4770]: Trace[28334828]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:42:09.378) Jan 26 18:42:09 crc kubenswrapper[4770]: Trace[28334828]: 
[10.00210867s] [10.00210867s] END Jan 26 18:42:09 crc kubenswrapper[4770]: E0126 18:42:09.378379 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 18:42:09 crc kubenswrapper[4770]: I0126 18:42:09.699923 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:24:27.51101563 +0000 UTC Jan 26 18:42:09 crc kubenswrapper[4770]: I0126 18:42:09.937074 4770 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 18:42:09 crc kubenswrapper[4770]: I0126 18:42:09.937199 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 18:42:09 crc kubenswrapper[4770]: I0126 18:42:09.999763 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 
18:42:10 crc kubenswrapper[4770]: I0126 18:42:09.999872 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 18:42:10 crc kubenswrapper[4770]: I0126 18:42:10.005687 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 18:42:10 crc kubenswrapper[4770]: I0126 18:42:10.005776 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 18:42:10 crc kubenswrapper[4770]: I0126 18:42:10.700291 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:22:45.334887477 +0000 UTC Jan 26 18:42:11 crc kubenswrapper[4770]: I0126 18:42:11.216460 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]log ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]etcd ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 18:42:11 crc kubenswrapper[4770]: 
[+]poststarthook/start-apiserver-admission-initializer ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-apiextensions-informers ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/crd-informer-synced ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 26 18:42:11 crc kubenswrapper[4770]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 26 18:42:11 crc kubenswrapper[4770]: 
[+]poststarthook/bootstrap-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/start-kube-aggregator-informers ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/apiservice-registration-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/apiservice-discovery-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]autoregister-completion ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/apiservice-openapi-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 26 18:42:11 crc kubenswrapper[4770]: livez check failed Jan 26 18:42:11 crc kubenswrapper[4770]: I0126 18:42:11.216551 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:42:11 crc kubenswrapper[4770]: I0126 18:42:11.701447 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:53:39.538883355 +0000 UTC Jan 26 18:42:12 crc kubenswrapper[4770]: I0126 18:42:12.136896 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:12 crc kubenswrapper[4770]: I0126 18:42:12.139062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 18:42:12 crc kubenswrapper[4770]: I0126 18:42:12.139133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:12 crc kubenswrapper[4770]: I0126 18:42:12.139161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:12 crc kubenswrapper[4770]: I0126 18:42:12.139207 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:42:12 crc kubenswrapper[4770]: E0126 18:42:12.147042 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 18:42:12 crc kubenswrapper[4770]: I0126 18:42:12.702178 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:00:08.344664509 +0000 UTC Jan 26 18:42:13 crc kubenswrapper[4770]: I0126 18:42:13.702779 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:46:02.062284642 +0000 UTC Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.525147 4770 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.703766 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:26:49.055083356 +0000 UTC Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.988578 4770 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.988657 4770 trace.go:236] Trace[1089915769]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 
18:42:02.161) (total time: 12827ms): Jan 26 18:42:14 crc kubenswrapper[4770]: Trace[1089915769]: ---"Objects listed" error: 12827ms (18:42:14.988) Jan 26 18:42:14 crc kubenswrapper[4770]: Trace[1089915769]: [12.827274542s] [12.827274542s] END Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.988739 4770 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.988821 4770 trace.go:236] Trace[1405293078]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 18:42:00.433) (total time: 14555ms): Jan 26 18:42:14 crc kubenswrapper[4770]: Trace[1405293078]: ---"Objects listed" error: 14555ms (18:42:14.988) Jan 26 18:42:14 crc kubenswrapper[4770]: Trace[1405293078]: [14.555140117s] [14.555140117s] END Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.988848 4770 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.988666 4770 trace.go:236] Trace[973857057]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 18:42:04.606) (total time: 10382ms): Jan 26 18:42:14 crc kubenswrapper[4770]: Trace[973857057]: ---"Objects listed" error: 10382ms (18:42:14.988) Jan 26 18:42:14 crc kubenswrapper[4770]: Trace[973857057]: [10.382323588s] [10.382323588s] END Jan 26 18:42:14 crc kubenswrapper[4770]: I0126 18:42:14.988934 4770 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.011174 4770 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.300399 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 18:42:15 crc kubenswrapper[4770]: 
I0126 18:42:15.321266 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.373450 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40112->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.373503 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40112->192.168.126.11:17697: read: connection reset by peer" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.593886 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.594375 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.686162 4770 apiserver.go:52] "Watching apiserver" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.688647 4770 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 
18:42:15.688856 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.689155 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.689205 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.689270 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.689355 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.689419 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.689618 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.689736 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.689948 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.689983 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.690459 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.692764 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.692924 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.693212 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.693542 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.693720 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.693895 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.694051 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.694403 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.695625 4770 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 18:42:15 crc kubenswrapper[4770]: 
I0126 18:42:15.704821 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:36:12.449771924 +0000 UTC Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.714631 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.725103 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.734091 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.743796 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.758035 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.787189 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794540 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794628 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794681 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794761 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794797 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794825 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794858 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794888 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794927 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794958 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.794991 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795022 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795051 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795082 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 
18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795115 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795145 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795176 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795208 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795288 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") 
pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795320 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795363 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795394 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795425 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795463 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795508 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795542 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795572 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795612 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795644 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795676 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 
18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795747 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795780 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795816 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795866 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795897 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795930 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.795990 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796022 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796051 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796083 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 
18:42:15.796116 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796155 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796190 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796234 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796266 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796298 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796333 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796367 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796404 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796444 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796479 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " 
Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796515 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796553 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796591 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796623 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796654 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796686 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796752 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796786 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796822 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796855 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796894 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796928 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796959 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.796999 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797030 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797072 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797108 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797140 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797173 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797206 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797272 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797312 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797344 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797400 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797441 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797474 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797507 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797577 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797621 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797655 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797689 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797750 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797784 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797818 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797850 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797882 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797917 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797950 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.797988 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798143 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798188 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798254 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798291 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798324 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798361 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798396 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798432 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798464 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798505 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" 
(UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798539 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798573 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798606 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798639 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798673 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 18:42:15 crc 
kubenswrapper[4770]: I0126 18:42:15.798733 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798770 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798805 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798839 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798878 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798911 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798958 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.798999 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799034 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799066 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799104 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799139 4770 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799174 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799208 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799266 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799306 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799343 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799376 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799409 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799441 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799482 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799518 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799560 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799595 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799631 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799668 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799731 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799767 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799832 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799873 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799912 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799953 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.799988 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800021 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800058 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800095 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800136 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800175 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800213 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800253 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800646 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800686 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800744 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800779 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800857 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.800932 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801003 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801184 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801206 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801234 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801261 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801277 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801280 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801422 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801442 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801456 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801438 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.801508 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802044 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802177 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802230 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802243 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802356 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802397 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802413 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802473 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802549 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802679 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802838 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802852 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802887 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.802984 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803129 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803353 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803373 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803467 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803849 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803883 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803965 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.803943 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.804124 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.804819 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.805122 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.805386 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.805846 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.805964 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806158 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806234 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806318 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806359 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.806395 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:42:16.30636635 +0000 UTC m=+20.871273142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806458 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806547 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806880 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.806980 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.807362 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.807966 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.808087 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.808442 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.808501 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.808730 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809042 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809076 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809260 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809052 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809328 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809651 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809729 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.809866 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.810007 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.810071 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.810091 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.810161 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.810233 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.811530 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.811936 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.812443 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.812580 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.812644 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813093 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813488 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813532 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813664 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813692 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813733 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813763 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813792 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813818 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813844 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813873 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813891 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813912 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813928 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813945 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813962 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813980 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.813998 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814018 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814038 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814059 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814071 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814081 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814135 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814180 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814200 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814206 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814222 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.814485 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.818377 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.824017 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.824496 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.824760 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825020 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825099 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825163 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825226 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825241 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825294 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825321 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.825322 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.818942 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.818820 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.826163 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.819240 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.819260 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.826607 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.826652 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.820364 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.826694 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.826944 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.827115 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.827381 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.827661 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.827970 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828036 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828097 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828264 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828361 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828485 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828612 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828875 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 
18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828956 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.828959 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829038 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829096 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829320 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829349 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829632 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.829671 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829929 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829991 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.830005 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.829693 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.830533 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.830866 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.830960 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.831183 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.831196 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.831483 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.831477 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.831570 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:16.331550247 +0000 UTC m=+20.896456979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.831831 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832106 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832142 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832380 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832393 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832433 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832634 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832645 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.832898 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.833403 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.833546 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.834693 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.835257 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.835323 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.835553 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.835683 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.836163 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.836179 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.834895 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837484 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837514 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837521 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" 
(OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837618 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837631 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837756 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837856 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837869 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837958 4770 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837967 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.837993 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.838413 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.838446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.838484 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.838895 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839350 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839395 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839423 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839586 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839596 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839608 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839883 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.839757 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840011 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840024 4770 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840036 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840058 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840080 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840105 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840131 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840153 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840175 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840256 4770 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 
crc kubenswrapper[4770]: I0126 18:42:15.836270 4770 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840301 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840306 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840331 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840493 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840747 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840749 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.841225 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.841250 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.841273 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842037 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842177 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842218 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842240 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842391 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.842428 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.840281 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.842494 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:16.342471881 +0000 UTC m=+20.907378643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842518 4770 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842548 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842568 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842586 4770 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842653 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: 
"57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842880 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842902 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842919 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842936 4770 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842954 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842971 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.842986 4770 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" 
DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843002 4770 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843018 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843036 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843092 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843110 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843126 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843143 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: 
I0126 18:42:15.843162 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843181 4770 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843198 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843216 4770 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843234 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843250 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843271 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843288 4770 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843304 4770 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843320 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843339 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843356 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843372 4770 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843396 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843413 4770 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843429 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843446 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843462 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844114 4770 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844142 4770 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844159 4770 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844176 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844193 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844209 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844224 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844241 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844259 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844275 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844292 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844309 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844326 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844346 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844362 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844377 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844393 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844411 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844427 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844443 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844465 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844483 4770 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844499 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844518 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844535 4770 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844787 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844807 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844825 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844871 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844887 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844902 4770 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844918 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc 
kubenswrapper[4770]: I0126 18:42:15.844935 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844951 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844966 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844982 4770 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845000 4770 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845016 4770 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845032 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845047 4770 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845061 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845076 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845092 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845108 4770 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845123 4770 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845140 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845155 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845171 4770 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845188 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845206 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845222 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845239 4770 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845259 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845275 4770 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" 
Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845290 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845306 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845322 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845339 4770 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845357 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845372 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845388 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845407 4770 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845423 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845441 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845459 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845476 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845493 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845512 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845529 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845546 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843034 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843369 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.843616 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844012 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.844465 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845787 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845814 4770 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845833 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845848 4770 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845865 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845882 4770 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845148 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845233 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845548 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.845647 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.846020 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.846170 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.846607 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.847197 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.847493 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.847968 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.848375 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853097 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853120 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853135 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853206 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:16.35318472 +0000 UTC m=+20.918091452 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853224 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853249 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853262 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.853314 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:16.353296513 +0000 UTC m=+20.918203245 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.853512 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.853412 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.853787 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.853953 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.854254 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.854674 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.854911 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.855216 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.855232 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.855264 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.855317 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.856317 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.856651 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.859464 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.861800 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.862057 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.865256 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.874948 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.877483 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.885922 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.894230 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.894851 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.896563 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5" exitCode=255 Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.896860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5"} Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.902848 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 
18:42:15.903284 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.903372 4770 scope.go:117] "RemoveContainer" containerID="a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5" Jan 26 18:42:15 crc kubenswrapper[4770]: E0126 18:42:15.903469 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.913167 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.924095 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.943004 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.946956 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947054 4770 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947070 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947086 4770 reconciler_common.go:293] "Volume detached for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947099 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947106 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947133 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947113 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947183 4770 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947193 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947224 4770 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947235 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947245 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947257 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947267 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947277 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947286 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947296 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947304 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947312 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947322 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947330 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947338 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947346 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947356 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947364 4770 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947372 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947380 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947388 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947396 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947404 4770 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" 
DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947413 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947423 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947432 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947440 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947450 4770 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947458 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947466 4770 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947474 4770 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947482 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947490 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947498 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947506 4770 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947514 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947522 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947530 4770 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc 
kubenswrapper[4770]: I0126 18:42:15.947540 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947547 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947556 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947564 4770 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947573 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947581 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947590 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947597 4770 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947605 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947613 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947621 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947629 4770 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947637 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.947645 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.952793 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.967461 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.976455 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.986944 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:15 crc kubenswrapper[4770]: I0126 18:42:15.995046 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.002886 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.004941 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.010190 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.012592 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.021973 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.031490 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.052882 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.054668 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.057785 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.057799 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.063240 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.063713 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.063770 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.081154 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.082047 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.087975 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.091369 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150365 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150413 4770 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150430 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150446 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150459 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150472 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150485 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150498 4770 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150510 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.150523 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.197060 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.205378 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.223254 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.240223 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\
"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.249901 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.251355 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.251498 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.263679 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.281552 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.291931 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.306030 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.315203 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.316266 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.324757 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.353492 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:42:17.35346049 +0000 UTC m=+21.918367222 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.362181 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.362268 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.362304 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.362321 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.362343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362413 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362460 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:17.362446283 +0000 UTC m=+21.927353015 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362787 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362854 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362869 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362873 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:17.362854924 +0000 UTC m=+21.927761656 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362878 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362920 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362924 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:17.362912755 +0000 UTC m=+21.927819607 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362930 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362940 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:16 crc kubenswrapper[4770]: E0126 18:42:16.362964 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:17.362957166 +0000 UTC m=+21.927863898 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.370262 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.387156 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.398076 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.409886 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.424549 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.439347 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.454280 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.471872 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.498929 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.633381 4770 csr.go:261] certificate signing request csr-w5csf is approved, waiting to be issued Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.648477 4770 csr.go:257] certificate signing request csr-w5csf is issued Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.705985 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:05:25.484449332 +0000 UTC Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.900322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265"} Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.900367 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4549968a50ba9ad43d0febbda3b170658db2dd02ef01d142d77da8724012e680"} Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.902031 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.903404 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04"} Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.903614 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.904117 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6ef705c909f7598b5e66b14ea631d802a014744de9fda7df5c6012d9da4b06de"} Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.905387 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548"} Jan 26 18:42:16 crc kubenswrapper[4770]: 
I0126 18:42:16.905426 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38"} Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.905442 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"db6c5f0e1b87240748303a0352b3702813d42b18c1a0c7a4e88f0fb0b842070f"} Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.915922 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.937330 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.939598 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.945115 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.952157 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.963664 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.977896 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.989066 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nnf7c"] Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.989299 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.989514 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.991403 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nf9ww"] Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.991898 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.992933 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-f87gd"] Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.993117 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-f87gd" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.993594 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kk5wm"] Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.993858 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.994273 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.994443 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.994628 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.995018 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.995188 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.995317 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.996024 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.996205 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.996498 4770 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.996532 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.996718 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.996845 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.996504 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.997016 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 18:42:16 crc kubenswrapper[4770]: I0126 18:42:16.997119 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.002097 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.021231 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.048828 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.066825 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6109a686-3ab2-465e-8a96-354f2ecbf491-proxy-tls\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.066895 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-system-cni-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.066912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-hostroot\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.066937 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-cni-bin\") pod \"multus-f87gd\" (UID: 
\"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.066953 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-cni-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.066968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf1d4063-db34-411a-bdbc-3736acf7f126-cni-binary-copy\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.066983 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-socket-dir-parent\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067000 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97klc\" (UniqueName: \"kubernetes.io/projected/21c84bb4-c720-4d18-bb93-908501f2f39e-kube-api-access-97klc\") pod \"node-resolver-kk5wm\" (UID: \"21c84bb4-c720-4d18-bb93-908501f2f39e\") " pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067014 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-cni-multus\") pod \"multus-f87gd\" (UID: 
\"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067029 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-kubelet\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067042 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-conf-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067056 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-etc-kubernetes\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067072 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-k8s-cni-cncf-io\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067087 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: 
\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067110 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6109a686-3ab2-465e-8a96-354f2ecbf491-rootfs\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067127 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cnibin\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067143 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-daemon-config\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067158 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgvlm\" (UniqueName: \"kubernetes.io/projected/cf1d4063-db34-411a-bdbc-3736acf7f126-kube-api-access-rgvlm\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067179 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-cnibin\") pod 
\"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067195 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6109a686-3ab2-465e-8a96-354f2ecbf491-mcd-auth-proxy-config\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmkx\" (UniqueName: \"kubernetes.io/projected/6109a686-3ab2-465e-8a96-354f2ecbf491-kube-api-access-cpmkx\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067230 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-os-release\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067242 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-netns\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067256 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-multus-certs\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-os-release\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067282 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-system-cni-dir\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067297 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067310 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lng8h\" (UniqueName: \"kubernetes.io/projected/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-kube-api-access-lng8h\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067325 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/21c84bb4-c720-4d18-bb93-908501f2f39e-hosts-file\") pod \"node-resolver-kk5wm\" (UID: \"21c84bb4-c720-4d18-bb93-908501f2f39e\") " pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.067339 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cni-binary-copy\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.075353 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6a
bb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.088718 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.101252 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.115655 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.144817 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.153931 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.164228 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.167816 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-hostroot\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.167854 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-cni-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.167872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-cni-bin\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.167887 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-socket-dir-parent\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.167940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-hostroot\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " 
pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.167949 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-cni-bin\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.167983 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-socket-dir-parent\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97klc\" (UniqueName: \"kubernetes.io/projected/21c84bb4-c720-4d18-bb93-908501f2f39e-kube-api-access-97klc\") pod \"node-resolver-kk5wm\" (UID: \"21c84bb4-c720-4d18-bb93-908501f2f39e\") " pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168010 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-cni-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf1d4063-db34-411a-bdbc-3736acf7f126-cni-binary-copy\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168087 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-kubelet\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168105 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-conf-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168121 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-etc-kubernetes\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168137 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-k8s-cni-cncf-io\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168147 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-conf-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168155 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-etc-kubernetes\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168137 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-kubelet\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168176 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-k8s-cni-cncf-io\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168154 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-cni-multus\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168297 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168180 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-var-lib-cni-multus\") pod 
\"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168339 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6109a686-3ab2-465e-8a96-354f2ecbf491-rootfs\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168366 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-daemon-config\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168383 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgvlm\" (UniqueName: \"kubernetes.io/projected/cf1d4063-db34-411a-bdbc-3736acf7f126-kube-api-access-rgvlm\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168406 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cnibin\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168426 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-cnibin\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 
26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168441 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpmkx\" (UniqueName: \"kubernetes.io/projected/6109a686-3ab2-465e-8a96-354f2ecbf491-kube-api-access-cpmkx\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168445 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6109a686-3ab2-465e-8a96-354f2ecbf491-rootfs\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168461 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-os-release\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168488 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-cnibin\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6109a686-3ab2-465e-8a96-354f2ecbf491-mcd-auth-proxy-config\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc 
kubenswrapper[4770]: I0126 18:42:17.168539 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-multus-certs\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168556 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-os-release\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168573 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-netns\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168590 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-multus-certs\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168594 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-system-cni-dir\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168461 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cnibin\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168616 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168630 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-system-cni-dir\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168644 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/21c84bb4-c720-4d18-bb93-908501f2f39e-hosts-file\") pod \"node-resolver-kk5wm\" (UID: \"21c84bb4-c720-4d18-bb93-908501f2f39e\") " pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168664 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cni-binary-copy\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-os-release\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168681 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lng8h\" (UniqueName: \"kubernetes.io/projected/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-kube-api-access-lng8h\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168718 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168738 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-os-release\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168721 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6109a686-3ab2-465e-8a96-354f2ecbf491-proxy-tls\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168759 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/21c84bb4-c720-4d18-bb93-908501f2f39e-hosts-file\") pod \"node-resolver-kk5wm\" (UID: \"21c84bb4-c720-4d18-bb93-908501f2f39e\") " pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168784 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-system-cni-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168812 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-host-run-netns\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168909 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cf1d4063-db34-411a-bdbc-3736acf7f126-system-cni-dir\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.168971 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cf1d4063-db34-411a-bdbc-3736acf7f126-multus-daemon-config\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.169231 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " 
pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.169294 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-cni-binary-copy\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.169292 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6109a686-3ab2-465e-8a96-354f2ecbf491-mcd-auth-proxy-config\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.169435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cf1d4063-db34-411a-bdbc-3736acf7f126-cni-binary-copy\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.173892 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6109a686-3ab2-465e-8a96-354f2ecbf491-proxy-tls\") pod \"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.182197 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.185656 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97klc\" (UniqueName: \"kubernetes.io/projected/21c84bb4-c720-4d18-bb93-908501f2f39e-kube-api-access-97klc\") pod \"node-resolver-kk5wm\" (UID: \"21c84bb4-c720-4d18-bb93-908501f2f39e\") " pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.190216 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lng8h\" (UniqueName: \"kubernetes.io/projected/3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e-kube-api-access-lng8h\") pod \"multus-additional-cni-plugins-nf9ww\" (UID: \"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\") " pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.198333 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpmkx\" (UniqueName: \"kubernetes.io/projected/6109a686-3ab2-465e-8a96-354f2ecbf491-kube-api-access-cpmkx\") pod 
\"machine-config-daemon-nnf7c\" (UID: \"6109a686-3ab2-465e-8a96-354f2ecbf491\") " pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.198382 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgvlm\" (UniqueName: \"kubernetes.io/projected/cf1d4063-db34-411a-bdbc-3736acf7f126-kube-api-access-rgvlm\") pod \"multus-f87gd\" (UID: \"cf1d4063-db34-411a-bdbc-3736acf7f126\") " pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.219131 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.239751 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.279238 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.302175 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.309164 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.311280 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: W0126 18:42:17.311784 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6109a686_3ab2_465e_8a96_354f2ecbf491.slice/crio-6ecff52813a64f3883b2e9b35fcfd308e1a31f5c18acbc1d4402cda9249f1ca4 WatchSource:0}: Error finding container 6ecff52813a64f3883b2e9b35fcfd308e1a31f5c18acbc1d4402cda9249f1ca4: Status 404 returned error can't find the container with id 6ecff52813a64f3883b2e9b35fcfd308e1a31f5c18acbc1d4402cda9249f1ca4 Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.321510 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-f87gd" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.326420 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kk5wm" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.336498 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.372935 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.373005 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:17 crc 
kubenswrapper[4770]: I0126 18:42:17.373032 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.373051 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373117 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:42:19.373091045 +0000 UTC m=+23.937997777 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373157 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373172 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.373174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373177 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373220 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373228 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373236 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373251 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:19.373244478 +0000 UTC m=+23.938151210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373182 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373277 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:19.373264659 +0000 UTC m=+23.938171391 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373207 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373291 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:19.373284639 +0000 UTC m=+23.938191371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.373304 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:19.3732988 +0000 UTC m=+23.938205532 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.376435 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lgvzv"] Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.377178 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.380723 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.380831 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.380845 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.380900 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.381020 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.381083 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.381220 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 
18:42:17.401145 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.432458 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.444730 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.459896 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.474997 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-var-lib-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-etc-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 
18:42:17.475048 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-slash\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475061 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-systemd\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475077 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475097 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-config\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475143 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-kubelet\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 
18:42:17.475176 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-netns\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475260 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-node-log\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475298 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-env-overrides\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475335 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49551d69-752c-4bcd-b265-d98a3ec92838-ovn-node-metrics-cert\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475386 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-script-lib\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc 
kubenswrapper[4770]: I0126 18:42:17.475406 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-log-socket\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475420 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8r7\" (UniqueName: \"kubernetes.io/projected/49551d69-752c-4bcd-b265-d98a3ec92838-kube-api-access-rg8r7\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475433 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-bin\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475446 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-ovn\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475512 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-ovn-kubernetes\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475573 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-systemd-units\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.475589 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-netd\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.477632 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.490035 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.517800 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.542111 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.567675 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.576628 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.576742 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577315 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-config\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.576690 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-config\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577394 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-kubelet\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577408 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-netns\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577425 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-node-log\") pod 
\"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577445 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-env-overrides\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577468 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49551d69-752c-4bcd-b265-d98a3ec92838-ovn-node-metrics-cert\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577515 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-script-lib\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-log-socket\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577556 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg8r7\" (UniqueName: \"kubernetes.io/projected/49551d69-752c-4bcd-b265-d98a3ec92838-kube-api-access-rg8r7\") pod \"ovnkube-node-lgvzv\" (UID: 
\"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577575 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-bin\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577600 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-ovn\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577654 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-ovn-kubernetes\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577683 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-systemd-units\") pod \"ovnkube-node-lgvzv\" (UID: 
\"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577750 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-netd\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577775 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-var-lib-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577794 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-etc-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577811 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-slash\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577827 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-systemd\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577883 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-systemd\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577919 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-kubelet\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577950 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-netns\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.577980 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-node-log\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.578364 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-env-overrides\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579082 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-var-lib-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579107 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-ovn-kubernetes\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579131 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-systemd-units\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579159 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-netd\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579183 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-etc-openvswitch\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579181 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-log-socket\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579215 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-slash\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579232 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-script-lib\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579238 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-ovn\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579255 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-bin\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.579283 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.583545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49551d69-752c-4bcd-b265-d98a3ec92838-ovn-node-metrics-cert\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.585967 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.595372 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg8r7\" (UniqueName: \"kubernetes.io/projected/49551d69-752c-4bcd-b265-d98a3ec92838-kube-api-access-rg8r7\") pod \"ovnkube-node-lgvzv\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.598586 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.618174 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.631843 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.650284 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 18:37:16 +0000 UTC, rotation deadline is 
2026-11-14 13:14:08.976900112 +0000 UTC Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.650350 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7002h31m51.326553431s for next certificate rotation Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.650880 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.696639 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.706634 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:58:08.047370401 +0000 UTC Jan 26 18:42:17 crc kubenswrapper[4770]: W0126 18:42:17.721313 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49551d69_752c_4bcd_b265_d98a3ec92838.slice/crio-c03694b9b6023e05648a8ce21790ebb2c9ea87c84aeca356255ca2de9c56fcf0 WatchSource:0}: Error finding container c03694b9b6023e05648a8ce21790ebb2c9ea87c84aeca356255ca2de9c56fcf0: Status 404 returned error can't find the container with id c03694b9b6023e05648a8ce21790ebb2c9ea87c84aeca356255ca2de9c56fcf0 Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.766854 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.766974 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.767296 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.767353 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.767412 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:17 crc kubenswrapper[4770]: E0126 18:42:17.767472 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.771499 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.772150 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.772838 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.773431 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.774033 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.774538 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.775200 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.777360 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.778160 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.779368 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.780040 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.783657 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.784493 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.785148 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.786216 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.786886 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.788147 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.788625 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.789377 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.790691 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.791408 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.792188 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.792812 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.793607 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.794168 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.798640 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.799552 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.800029 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.801539 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.802046 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.802986 4770 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.803095 4770 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.804802 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.805906 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.806384 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.807979 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.808681 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.809640 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.810468 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.811677 4770 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.812191 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.813220 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.814491 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.815173 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.815791 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.817150 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.818377 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.819344 4770 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.820399 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.821050 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.821600 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.822781 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.823670 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.824263 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.910630 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kk5wm" event={"ID":"21c84bb4-c720-4d18-bb93-908501f2f39e","Type":"ContainerStarted","Data":"98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1"} Jan 26 
18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.911002 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kk5wm" event={"ID":"21c84bb4-c720-4d18-bb93-908501f2f39e","Type":"ContainerStarted","Data":"92fd03f61da6c91d499389a98e7e252337e2d76c4da4ae0e5e463fb7e1035517"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.911812 4770 generic.go:334] "Generic (PLEG): container finished" podID="3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e" containerID="dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375" exitCode=0 Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.911852 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerDied","Data":"dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.911868 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerStarted","Data":"d6d4ae0eeaa0a432abc75bc9324e715730bf39f338be4e44019eb4ff8ddc5458"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.913027 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7" exitCode=0 Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.913068 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.913128 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" 
event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"c03694b9b6023e05648a8ce21790ebb2c9ea87c84aeca356255ca2de9c56fcf0"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.914746 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerStarted","Data":"4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.914795 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerStarted","Data":"563d2a34cdd1b3394f886b3db9c14385c1a7b20cd99cf1e7180337a438b45890"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.916385 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.916410 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.916420 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"6ecff52813a64f3883b2e9b35fcfd308e1a31f5c18acbc1d4402cda9249f1ca4"} Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.935406 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.954314 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:17 crc kubenswrapper[4770]: I0126 18:42:17.970304 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.005287 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.022970 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.036865 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.065672 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.078079 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.108726 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.135188 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.146928 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.157747 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.169032 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.177587 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac3
9aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.198303 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.234936 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.277244 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.316366 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.360948 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.394287 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.439620 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.486605 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.521930 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.547784 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.554865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.554934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.554948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.556219 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.558446 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.609008 4770 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.609474 4770 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.610374 4770 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.610392 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.610400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.610412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.610422 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: E0126 18:42:18.627536 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.640926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.640968 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.640979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.640995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.641006 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.660482 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: E0126 18:42:18.660857 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.673535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.673576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.673584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.673599 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.673608 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.684109 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: E0126 18:42:18.700785 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.706829 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:00:06.104142324 +0000 UTC Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.706928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.706965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.706977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.706991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.707000 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.717031 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: E0126 18:42:18.717223 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.725670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.725690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.725715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.725728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.725737 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: E0126 18:42:18.736037 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: E0126 18:42:18.736144 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.737817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.737847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.737859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.737874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.737887 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.760321 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.839791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.839820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.839828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.839840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.839849 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.926159 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.926288 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.926363 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.926419 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.928547 4770 generic.go:334] "Generic (PLEG): container finished" podID="3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e" containerID="31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087" exitCode=0 Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.928591 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerDied","Data":"31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.942039 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.942065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.942073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.942085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.942093 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:18Z","lastTransitionTime":"2026-01-26T18:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.961179 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.975327 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:18 crc kubenswrapper[4770]: I0126 18:42:18.991721 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.009623 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.021574 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.035433 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.044452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.044489 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.044500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.044516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.044528 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.049833 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.075967 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.120122 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.146872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.146900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.146910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.146925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.146934 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.158334 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.194368 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.237736 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.248762 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.248814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.248829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.248851 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.248865 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.276970 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.317212 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.351256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.351478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.351538 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.351669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.351780 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.394235 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.394433 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:42:23.394395416 +0000 UTC m=+27.959302188 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.394590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.394818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.394960 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.395066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.394920 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395296 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395398 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395561 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:23.395543866 +0000 UTC m=+27.960450608 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395084 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395758 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395843 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395099 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395155 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.395967 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:23.395955627 +0000 UTC m=+27.960862369 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.396134 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:23.396106141 +0000 UTC m=+27.961012883 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.396163 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:23.396154112 +0000 UTC m=+27.961060854 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.454028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.454082 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.454099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.454122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.454141 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.556550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.556614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.556652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.556676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.556694 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.659509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.659593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.659604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.659619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.659628 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.707263 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:32:11.173569548 +0000 UTC Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.762268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.762319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.762332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.762350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.762362 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.766551 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.766655 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.766835 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.766878 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.766940 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:19 crc kubenswrapper[4770]: E0126 18:42:19.767011 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.865278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.865329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.865345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.865370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.865388 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.937379 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.937685 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.940588 4770 generic.go:334] "Generic (PLEG): container finished" podID="3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e" containerID="cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1" exitCode=0 Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.940660 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerDied","Data":"cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.967872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.967908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.967919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.967934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 
18:42:19.967946 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:19Z","lastTransitionTime":"2026-01-26T18:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.976479 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",
\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:19 crc kubenswrapper[4770]: I0126 18:42:19.992753 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:19Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.003644 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-b6qql"] Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.003970 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.005490 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.005602 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.005754 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.005808 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.010788 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa
20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.027249 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.039551 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.053824 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.065725 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.071825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.071860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.071871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc 
kubenswrapper[4770]: I0126 18:42:20.071892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.071906 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.075486 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.087746 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 
18:42:20.096977 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.104538 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jpw7\" (UniqueName: \"kubernetes.io/projected/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-kube-api-access-4jpw7\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.104663 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-serviceca\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.104726 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-host\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.110549 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.124522 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.139191 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.150453 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.164876 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.174179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.174238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.174262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.174290 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.174312 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.181643 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.194645 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.207116 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-serviceca\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.207168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-host\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.207228 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jpw7\" (UniqueName: \"kubernetes.io/projected/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-kube-api-access-4jpw7\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.207358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-host\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.208899 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-serviceca\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.210908 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.224778 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.228750 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jpw7\" (UniqueName: \"kubernetes.io/projected/b05a08e3-3ed4-479f-8b88-acf1d7868c9e-kube-api-access-4jpw7\") pod \"node-ca-b6qql\" (UID: \"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\") " pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.239181 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.255293 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.277240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.277273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.277282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 
18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.277295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.277304 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.302400 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85
aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.315529 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-b6qql" Jan 26 18:42:20 crc kubenswrapper[4770]: W0126 18:42:20.333730 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb05a08e3_3ed4_479f_8b88_acf1d7868c9e.slice/crio-9c8c7a5a9154e796f223025b849d43ef5988ecd10cc2832830dd40336b58edbb WatchSource:0}: Error finding container 9c8c7a5a9154e796f223025b849d43ef5988ecd10cc2832830dd40336b58edbb: Status 404 returned error can't find the container with id 9c8c7a5a9154e796f223025b849d43ef5988ecd10cc2832830dd40336b58edbb Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.344103 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.383406 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.383451 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.383463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.383481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.383493 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.386488 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.416799 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.465091 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.486851 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.486918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.486939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.486964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.486981 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.498015 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.544215 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.589890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc 
kubenswrapper[4770]: I0126 18:42:20.589929 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.589940 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.589955 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.589965 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.615257 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.692754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.692786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.692794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.692807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.692815 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.708139 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:32:39.135610255 +0000 UTC Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.795646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.795720 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.795732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.795747 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.795759 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.897924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.897982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.898001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.898028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.898045 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:20Z","lastTransitionTime":"2026-01-26T18:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.949816 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b6qql" event={"ID":"b05a08e3-3ed4-479f-8b88-acf1d7868c9e","Type":"ContainerStarted","Data":"6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.949866 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b6qql" event={"ID":"b05a08e3-3ed4-479f-8b88-acf1d7868c9e","Type":"ContainerStarted","Data":"9c8c7a5a9154e796f223025b849d43ef5988ecd10cc2832830dd40336b58edbb"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.953440 4770 generic.go:334] "Generic (PLEG): container finished" podID="3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e" containerID="2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c" exitCode=0 Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.953489 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerDied","Data":"2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c"} Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.969224 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:20 crc kubenswrapper[4770]: I0126 18:42:20.986284 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.000316 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.000366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.000380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.000398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.000412 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.019301 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.034764 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f64
45c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.052656 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.069737 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.085110 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.102717 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.102783 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.102801 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.102827 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.102839 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.105735 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z 
is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.133062 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.172491 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.191001 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.205842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.205900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.205959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.205978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.206014 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.206207 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.221989 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.238721 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.251940 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.267273 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.282940 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.306631 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.308158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.308196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.308207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.308221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.308233 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.349240 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.378762 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.410786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.410896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.410952 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.411019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.411074 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.420051 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b9
5289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.457596 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.496978 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.513590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.513641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.513650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.513663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.513672 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.534623 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.577052 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.615329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.615357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.615364 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.615377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.615386 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.617457 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f
3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.657914 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false
,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\
\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.703925 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.708401 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:54:03.237202356 +0000 UTC Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.718818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.718865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.718883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.718897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.718906 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.739501 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.766638 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.766651 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.766772 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:21 crc kubenswrapper[4770]: E0126 18:42:21.766923 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:21 crc kubenswrapper[4770]: E0126 18:42:21.766964 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:21 crc kubenswrapper[4770]: E0126 18:42:21.767025 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.777654 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.821117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.821158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.821167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.821182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.821193 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.924312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.924357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.924367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.924385 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.924398 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:21Z","lastTransitionTime":"2026-01-26T18:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.961577 4770 generic.go:334] "Generic (PLEG): container finished" podID="3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e" containerID="6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5" exitCode=0 Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.961646 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerDied","Data":"6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.976451 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.976711 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f"} Jan 26 18:42:21 crc kubenswrapper[4770]: I0126 18:42:21.992911 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:21Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.014259 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.028162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.028196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.028207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.028222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.028233 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.041857 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.054997 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.069472 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.088164 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.102778 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.130232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.130296 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.130311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 
18:42:22.130326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.130335 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.138468 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.185269 4770 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.224000 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.233050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.233118 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.233132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc 
kubenswrapper[4770]: I0126 18:42:22.233160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.233174 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.258460 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.303857 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.335834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.335871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.335890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.335904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.335918 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.341051 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.374510 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:22Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.439654 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.439713 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.439723 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 
18:42:22.439741 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.439752 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.542870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.542913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.542922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.542936 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.542946 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.645921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.645998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.646016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.646047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.646073 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.708778 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:37:55.419957574 +0000 UTC Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.749044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.749179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.749626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.750051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.750107 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.853114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.853176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.853197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.853220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.853273 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.956370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.956417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.956429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.956447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.956460 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:22Z","lastTransitionTime":"2026-01-26T18:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.986616 4770 generic.go:334] "Generic (PLEG): container finished" podID="3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e" containerID="d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af" exitCode=0 Jan 26 18:42:22 crc kubenswrapper[4770]: I0126 18:42:22.986687 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerDied","Data":"d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.005611 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.026594 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.040544 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.059289 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc 
kubenswrapper[4770]: I0126 18:42:23.059342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.059357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.059377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.059392 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.062381 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.077112 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.090931 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.103073 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.118921 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.139221 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.156984 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4
641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.161651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.161690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.161715 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.161732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.161743 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.168253 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.178438 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.192513 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.204113 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.215546 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:23Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.264209 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.264257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 
18:42:23.264273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.264290 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.264299 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.366767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.366812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.366821 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.366834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.366843 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.444942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.445103 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.445174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445289 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:42:31.445243566 +0000 UTC m=+36.010150348 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445318 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445403 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.445426 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445513 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:31.445481492 +0000 UTC m=+36.010388254 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445598 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445635 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445655 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445780 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:31.445721698 +0000 UTC m=+36.010628470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445855 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:31.445835301 +0000 UTC m=+36.010742133 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.445794 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445864 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445910 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.445937 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.446023 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:31.446004855 +0000 UTC m=+36.010911617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.469539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.469617 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.469629 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.469646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.469658 4770 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.573209 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.573292 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.573491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.573527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.573550 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.677005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.677071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.677094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.677122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.677143 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.709921 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:18:39.898303237 +0000 UTC Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.766233 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.766365 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.766399 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.766567 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.766651 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:23 crc kubenswrapper[4770]: E0126 18:42:23.766788 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.778949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.778976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.778985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.778999 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.779008 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.880904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.880956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.880974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.880995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.881011 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.984083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.984138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.984154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.984178 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.984196 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:23Z","lastTransitionTime":"2026-01-26T18:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:23 crc kubenswrapper[4770]: I0126 18:42:23.994942 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" event={"ID":"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e","Type":"ContainerStarted","Data":"653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.001079 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.001340 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.001482 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.005113 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.018249 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.038055 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.052970 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.065926 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.078998 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.096396 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc 
kubenswrapper[4770]: I0126 18:42:24.096450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.096461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.096480 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.096492 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.100097 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.101269 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.102397 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.135817 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.149126 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.159777 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.173827 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.188290 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.199448 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.199503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.199522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.199546 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.199569 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.200557 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.212094 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.224072 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.236394 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.248326 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.260317 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.276183 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.297907 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.301279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.301324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.301335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.301355 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.301367 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.312266 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.326845 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.344958 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.357281 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.368891 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.382634 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.394604 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.403836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.403875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.403885 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc 
kubenswrapper[4770]: I0126 18:42:24.403901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.403910 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.406815 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.420078 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.430603 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.440775 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:24Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.506688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.506764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.506782 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.506802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.506818 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.609072 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.609138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.609155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.609177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.609193 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.710179 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:07:34.260573878 +0000 UTC Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.712512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.712597 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.712620 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.712651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.712675 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.815508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.815592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.815617 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.815646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.815667 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.919289 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.919342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.919354 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.919374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:24 crc kubenswrapper[4770]: I0126 18:42:24.919384 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:24Z","lastTransitionTime":"2026-01-26T18:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.009253 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.022668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.022755 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.022773 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.022795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.022811 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.125832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.125873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.125882 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.125897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.125907 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.229057 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.229131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.229149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.229177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.229194 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.331848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.331885 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.331898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.331911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.331920 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.433981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.434038 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.434060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.434084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.434102 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.536485 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.536534 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.536544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.536577 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.536591 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.602297 4770 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.638917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.638951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.638960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.638973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.638982 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.710882 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:36:32.898382158 +0000 UTC Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.741552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.741609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.741632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.741671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.741693 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.766374 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:25 crc kubenswrapper[4770]: E0126 18:42:25.766541 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.766571 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:25 crc kubenswrapper[4770]: E0126 18:42:25.766692 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.766373 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:25 crc kubenswrapper[4770]: E0126 18:42:25.766853 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.799115 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.816439 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.830095 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.843983 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc 
kubenswrapper[4770]: I0126 18:42:25.844016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.844027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.844042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.844054 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.851940 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.868459 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.883399 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.900373 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.913932 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.930137 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.946210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.946246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.946255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.946268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.946277 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:25Z","lastTransitionTime":"2026-01-26T18:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.953387 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whe
reabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.965143 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.978288 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:25 crc kubenswrapper[4770]: I0126 18:42:25.995469 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.011893 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.012745 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.030326 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.049564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.049648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.049687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.049724 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.049736 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.152351 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.152422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.152433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.152450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.152462 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.261446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.261529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.261552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.261586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.261607 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.364848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.364927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.364952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.364984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.365004 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.467549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.467598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.467607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.467620 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.467629 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.570123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.570185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.570202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.570224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.570242 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.672645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.672711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.672722 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.672739 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.672748 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.711039 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:03:15.675780921 +0000 UTC Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.774903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.774975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.775002 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.775031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.775055 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.878099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.878164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.878178 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.878203 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.878217 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.980828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.980886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.980903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.980927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:26 crc kubenswrapper[4770]: I0126 18:42:26.980941 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:26Z","lastTransitionTime":"2026-01-26T18:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.016673 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/0.log" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.019994 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4" exitCode=1 Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.020040 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.020863 4770 scope.go:117] "RemoveContainer" containerID="05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.040131 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.061337 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.074290 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.083495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 
18:42:27.083913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.083922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.083938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.083948 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.087335 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.103022 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.123471 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.141602 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.155363 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.169575 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.186436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.186488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.186500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.186524 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.186537 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.188778 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.201749 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.221103 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\
\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.233558 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.247353 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.267921 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 
18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:27Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.288992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.289052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.289069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.289094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.289110 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.392358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.392407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.392426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.392500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.392520 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.495344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.495407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.495429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.495456 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.495481 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.597768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.599828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.599849 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.599879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.599893 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.706328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.706365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.706376 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.706390 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.706399 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.712108 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:30:17.460143515 +0000 UTC Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.766574 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.766574 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:27 crc kubenswrapper[4770]: E0126 18:42:27.766718 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:27 crc kubenswrapper[4770]: E0126 18:42:27.766757 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.766598 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:27 crc kubenswrapper[4770]: E0126 18:42:27.766820 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.808103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.808135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.808145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.808158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.808167 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.910880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.910955 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.910975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.911000 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:27 crc kubenswrapper[4770]: I0126 18:42:27.911018 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:27Z","lastTransitionTime":"2026-01-26T18:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.013959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.014015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.014035 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.014057 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.014074 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.027320 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/0.log" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.029540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.029646 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.044364 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.060862 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.072277 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.084058 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.102860 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.116511 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.116579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.116595 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.116622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.116636 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.117082 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.131279 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.151195 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\
\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.164471 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.178854 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.195626 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 
18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.206747 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.218651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.218683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc 
kubenswrapper[4770]: I0126 18:42:28.218709 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.218725 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.218735 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.222139 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.232629 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.244813 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.321858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.321926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.321950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc 
kubenswrapper[4770]: I0126 18:42:28.321976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.321999 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.425017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.425085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.425103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.425125 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.425142 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.528581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.528681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.528764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.528795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.528818 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.632464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.632541 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.632568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.632598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.632622 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.712255 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:08:27.799394578 +0000 UTC Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.735935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.736011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.736037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.736063 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.736081 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.841897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.841955 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.841970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.841991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.842014 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.927208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.927276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.927311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.927335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.927409 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: E0126 18:42:28.943560 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.948854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.948895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.948909 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.948931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.948944 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: E0126 18:42:28.965387 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.969531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.969565 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.969574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.969590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.969600 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:28 crc kubenswrapper[4770]: E0126 18:42:28.983884 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:28Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.987713 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.987739 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.987747 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.987761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:28 crc kubenswrapper[4770]: I0126 18:42:28.987770 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:28Z","lastTransitionTime":"2026-01-26T18:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: E0126 18:42:29.003635 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.007289 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.007315 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.007324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.007337 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.007346 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: E0126 18:42:29.023498 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: E0126 18:42:29.023648 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.030447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.030483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.030491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.030522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.030534 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.034888 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/1.log" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.035499 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/0.log" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.038907 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef" exitCode=1 Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.038945 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.038992 4770 scope.go:117] "RemoveContainer" containerID="05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.040144 4770 scope.go:117] "RemoveContainer" containerID="0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef" Jan 26 18:42:29 crc kubenswrapper[4770]: E0126 18:42:29.040416 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.057782 4770 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.075032 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.088301 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.114378 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.133429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 
crc kubenswrapper[4770]: I0126 18:42:29.133493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.133514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.133539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.133553 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.139990 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.158004 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.179882 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.195589 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.211828 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.230295 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.235888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 
18:42:29.235958 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.235980 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.236005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.236023 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.246206 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.267521 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.290080 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.307858 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.326596 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:29Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.338854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.338905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.338917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.338934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.338945 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.442052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.442135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.442205 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.442231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.442283 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.545668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.545780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.545805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.545829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.545845 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.649373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.649412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.649421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.649437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.649447 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.712595 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:25:38.381694765 +0000 UTC Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.752384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.752419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.752435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.752458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.752475 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.767145 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.767188 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:29 crc kubenswrapper[4770]: E0126 18:42:29.767304 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.767347 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:29 crc kubenswrapper[4770]: E0126 18:42:29.767507 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:29 crc kubenswrapper[4770]: E0126 18:42:29.767624 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.854563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.854622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.854633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.854647 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.854658 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.958099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.958166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.958185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.958211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:29 crc kubenswrapper[4770]: I0126 18:42:29.958229 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:29Z","lastTransitionTime":"2026-01-26T18:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.046184 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/1.log" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.061015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.061072 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.061081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.061094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.061102 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.156131 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm"] Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.156596 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.160147 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.162457 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.164510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.164604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.164623 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.164692 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.164765 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.171624 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50d06408-0503-4a23-a417-dff17ebd0e1c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.171762 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50d06408-0503-4a23-a417-dff17ebd0e1c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.171816 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50d06408-0503-4a23-a417-dff17ebd0e1c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.171846 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snm8\" (UniqueName: \"kubernetes.io/projected/50d06408-0503-4a23-a417-dff17ebd0e1c-kube-api-access-8snm8\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.179638 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.199782 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.214508 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.231053 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.252867 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.267974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.268045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.268064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.268094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.268120 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.271645 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.272938 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50d06408-0503-4a23-a417-dff17ebd0e1c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.273934 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50d06408-0503-4a23-a417-dff17ebd0e1c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.274173 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50d06408-0503-4a23-a417-dff17ebd0e1c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.274211 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8snm8\" (UniqueName: \"kubernetes.io/projected/50d06408-0503-4a23-a417-dff17ebd0e1c-kube-api-access-8snm8\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.274226 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50d06408-0503-4a23-a417-dff17ebd0e1c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.275256 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50d06408-0503-4a23-a417-dff17ebd0e1c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.290311 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50d06408-0503-4a23-a417-dff17ebd0e1c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.294085 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.296411 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8snm8\" (UniqueName: \"kubernetes.io/projected/50d06408-0503-4a23-a417-dff17ebd0e1c-kube-api-access-8snm8\") pod \"ovnkube-control-plane-749d76644c-5hkhm\" (UID: \"50d06408-0503-4a23-a417-dff17ebd0e1c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.306451 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.319409 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.335578 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.351934 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.364956 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.370775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.370806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.370815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.370829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.370838 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.385024 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.400403 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.415888 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.444243 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:30Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.471387 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.472920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.472951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.472960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.472974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.472983 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.576117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.576168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.576183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.576205 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.576217 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.678377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.678419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.678434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.678452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.678467 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.713169 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:28:44.215634415 +0000 UTC Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.782607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.782643 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.782652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.782666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.782676 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.884832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.885306 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.885328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.885353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.885372 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.987649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.987688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.987717 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.987734 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:30 crc kubenswrapper[4770]: I0126 18:42:30.987745 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:30Z","lastTransitionTime":"2026-01-26T18:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.055003 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" event={"ID":"50d06408-0503-4a23-a417-dff17ebd0e1c","Type":"ContainerStarted","Data":"e611e06f258c6fc371b7286b03f53e1e8482f1a839c9ce336bda03a395252e83"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.055086 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" event={"ID":"50d06408-0503-4a23-a417-dff17ebd0e1c","Type":"ContainerStarted","Data":"7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.055108 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" event={"ID":"50d06408-0503-4a23-a417-dff17ebd0e1c","Type":"ContainerStarted","Data":"f96ccd4f4354cd6a0b9304e51e65fc4116a82b7dae6baac3157fc4467bd7ac60"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.068802 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.085324 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.090666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.090716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.090725 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.090739 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.090749 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.105021 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.115500 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.128930 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.142998 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.158950 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.173956 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.193337 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc 
kubenswrapper[4770]: I0126 18:42:31.193386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.193398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.193417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.193450 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.196100 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping 
reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a
745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.214068 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.224734 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.234521 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.245555 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.258466 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.264715 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bqfpk"] Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.265306 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.265400 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.270769 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b
05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.281035 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.296168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.296221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.296236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.296258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.296271 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.297198 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.306950 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.317823 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.329212 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.339346 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.348730 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.363152 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.386948 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: 
\"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.387072 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwq5\" (UniqueName: \"kubernetes.io/projected/f836a816-01c1-448b-9736-c65a8f4f0044-kube-api-access-ljwq5\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.388851 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b
3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb92792
22b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18
:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.399345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.399387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.399395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.399409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.399420 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.401031 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.411839 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.427569 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.442667 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.456530 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.478880 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.487956 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.488092 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.488143 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.488188 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.488229 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488262 4770 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:42:47.48822508 +0000 UTC m=+52.053131852 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.488315 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwq5\" (UniqueName: \"kubernetes.io/projected/f836a816-01c1-448b-9736-c65a8f4f0044-kube-api-access-ljwq5\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488338 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488362 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488390 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488400 4770 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:47.488380214 +0000 UTC m=+52.053286986 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488405 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488457 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488490 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488500 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488464 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-26 18:42:47.488449196 +0000 UTC m=+52.053355928 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488504 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488548 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:47.488541238 +0000 UTC m=+52.053447970 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.488551 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488595 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488620 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:31.98860456 +0000 UTC m=+36.553511292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.488634 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:47.48862749 +0000 UTC m=+52.053534222 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.490272 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc 
kubenswrapper[4770]: I0126 18:42:31.501986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.502016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.502028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.502045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.502057 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.508363 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwq5\" (UniqueName: \"kubernetes.io/projected/f836a816-01c1-448b-9736-c65a8f4f0044-kube-api-access-ljwq5\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.508989 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"l
og-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8a
a8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.522470 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.604599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.604634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.604642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.604655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.604664 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.707285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.707325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.707335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.707351 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.707364 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.713711 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:20:14.925563816 +0000 UTC Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.766781 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.766835 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.766825 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.767042 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.767171 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.767303 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.811157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.811227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.811252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.811281 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.811302 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.913273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.913347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.913360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.913380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.913390 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:31Z","lastTransitionTime":"2026-01-26T18:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:31 crc kubenswrapper[4770]: I0126 18:42:31.995329 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.995605 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:31 crc kubenswrapper[4770]: E0126 18:42:31.995773 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:32.995735913 +0000 UTC m=+37.560642685 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.015960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.016022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.016045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.016075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.016104 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.119330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.119448 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.119470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.119492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.119512 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.221967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.222027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.222041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.222065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.222079 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.325355 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.325434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.325453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.325478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.325497 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.429011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.429103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.429121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.429147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.429166 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.532764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.532824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.532842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.532865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.532883 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.636081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.636145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.636163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.636188 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.636206 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.714647 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 03:15:42.553115826 +0000 UTC Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.739329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.739420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.739434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.739453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.739467 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.766373 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:32 crc kubenswrapper[4770]: E0126 18:42:32.766602 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.843607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.843675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.843726 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.843757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.843778 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.947339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.947428 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.947453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.947481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:32 crc kubenswrapper[4770]: I0126 18:42:32.947502 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:32Z","lastTransitionTime":"2026-01-26T18:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.007900 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:33 crc kubenswrapper[4770]: E0126 18:42:33.008158 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:33 crc kubenswrapper[4770]: E0126 18:42:33.008252 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:35.008223254 +0000 UTC m=+39.573130026 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.050157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.050206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.050220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.050238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.050274 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.153001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.153053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.153066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.153082 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.153115 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.256779 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.256860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.256877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.256902 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.256922 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.360103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.360155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.360167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.360182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.360193 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.463810 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.463878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.463910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.463936 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.463954 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.567195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.567255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.567274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.567299 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.567316 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.670649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.670690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.670740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.670758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.670770 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.715586 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:22:07.096589572 +0000 UTC Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.766441 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.766519 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.766441 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:33 crc kubenswrapper[4770]: E0126 18:42:33.766685 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:33 crc kubenswrapper[4770]: E0126 18:42:33.766796 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:33 crc kubenswrapper[4770]: E0126 18:42:33.766868 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.773059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.773121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.773148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.773174 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.773192 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.876211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.876278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.876295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.876330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.876346 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.979955 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.980023 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.980040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.980064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:33 crc kubenswrapper[4770]: I0126 18:42:33.980081 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:33Z","lastTransitionTime":"2026-01-26T18:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.083741 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.083805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.083824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.083852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.083872 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.187794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.187853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.187869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.187891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.187908 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.290693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.291153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.291318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.291472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.291830 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.394922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.394974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.394992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.395017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.395033 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.497762 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.498105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.498246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.498370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.498489 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.601735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.602092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.602235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.602377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.602495 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.705876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.705964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.705986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.706012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.706029 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.716201 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:54:41.343701177 +0000 UTC Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.766148 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:34 crc kubenswrapper[4770]: E0126 18:42:34.766331 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.808534 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.808608 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.808631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.808658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.808679 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.911952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.912021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.912045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.912076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:34 crc kubenswrapper[4770]: I0126 18:42:34.912102 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:34Z","lastTransitionTime":"2026-01-26T18:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.015426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.015499 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.015522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.015552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.015574 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.033276 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:35 crc kubenswrapper[4770]: E0126 18:42:35.033499 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:35 crc kubenswrapper[4770]: E0126 18:42:35.033585 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:39.033554282 +0000 UTC m=+43.598461054 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.118041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.118103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.118121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.118145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.118164 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.220542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.220926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.221060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.221186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.221311 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.324065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.324110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.324126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.324160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.324178 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.427372 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.427430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.427452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.427479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.427501 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.535004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.535248 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.535273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.535301 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.535402 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.600808 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.626067 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.638389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.638444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.638469 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.638494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc 
kubenswrapper[4770]: I0126 18:42:35.638512 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.644230 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.661518 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.683813 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.702347 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.717062 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:56:30.511104048 +0000 UTC Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.721181 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.741132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc 
kubenswrapper[4770]: I0126 18:42:35.741185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.741198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.741214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.741224 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.751890 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping 
reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a
745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.766751 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.766783 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:35 crc kubenswrapper[4770]: E0126 18:42:35.766895 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.767205 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:35 crc kubenswrapper[4770]: E0126 18:42:35.767300 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:35 crc kubenswrapper[4770]: E0126 18:42:35.767364 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.767615 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc 
kubenswrapper[4770]: I0126 18:42:35.784754 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.803290 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.821871 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.843666 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.844098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.844133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.844150 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.844174 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.844194 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.865854 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.884838 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.901363 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.915361 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.932650 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.946231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.946278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.946293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.946310 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.946321 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:35Z","lastTransitionTime":"2026-01-26T18:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.948693 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.965336 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.985077 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05318bb01e4d118eda4e13fb8b9de8742cc878dfc78da45e7900c7c3810da9d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:26Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:25.935634 6062 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 18:42:25.935664 6062 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 18:42:25.935718 6062 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 18:42:25.935689 6062 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:25.935742 6062 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 18:42:25.935752 6062 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:25.935746 6062 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:25.935768 6062 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:25.935800 6062 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 18:42:25.935809 6062 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 18:42:25.935834 6062 factory.go:656] Stopping watch factory\\\\nI0126 18:42:25.935845 6062 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:25.935851 6062 ovnkube.go:599] Stopped ovnkube\\\\nI0126 18:42:25.935858 6062 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:25.935868 6062 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 
handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:35 crc kubenswrapper[4770]: I0126 18:42:35.999063 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc 
kubenswrapper[4770]: I0126 18:42:36.028203 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.048220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.048262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.048275 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.048290 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.048301 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.048862 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.065169 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.080927 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.103239 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.119774 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.133455 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.145027 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.150754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.150819 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.150840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.150867 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.150885 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.161209 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.179979 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.197003 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.207335 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.222910 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.253172 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.253205 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.253213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.253224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.253233 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.356593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.356630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.356637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.356651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.356659 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.459540 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.459620 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.459645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.459678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.459740 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.562633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.562749 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.562776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.562812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.562840 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.666121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.666199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.666224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.666254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.666277 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.717784 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:02:36.744684954 +0000 UTC Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.766978 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:36 crc kubenswrapper[4770]: E0126 18:42:36.767232 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.770047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.770110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.770129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.770155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.770173 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.872876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.872937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.872959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.872983 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.872999 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.976622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.976767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.976795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.976826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:36 crc kubenswrapper[4770]: I0126 18:42:36.976852 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:36Z","lastTransitionTime":"2026-01-26T18:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.079321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.079391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.079407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.079431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.079453 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.182838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.182903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.182924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.182953 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.182974 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.286098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.286165 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.286182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.286205 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.286221 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.389672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.389786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.389822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.389854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.389873 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.493602 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.493649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.493657 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.493676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.493688 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.597041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.597111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.597131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.597155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.597171 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.700017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.700065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.700077 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.700096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.700107 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.718689 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:36:09.589311919 +0000 UTC Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.766661 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:37 crc kubenswrapper[4770]: E0126 18:42:37.766828 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.766883 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.766920 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:37 crc kubenswrapper[4770]: E0126 18:42:37.767006 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:37 crc kubenswrapper[4770]: E0126 18:42:37.767770 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.802361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.802403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.802411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.802426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.802435 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.905065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.905124 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.905140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.905199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:37 crc kubenswrapper[4770]: I0126 18:42:37.905225 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:37Z","lastTransitionTime":"2026-01-26T18:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.007186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.007235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.007251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.007272 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.007289 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.109576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.109629 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.109646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.109668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.109686 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.212434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.212492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.212509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.212531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.212548 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.315157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.315214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.315230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.315256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.315273 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.417688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.417780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.417804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.417830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.417851 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.521360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.521419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.521435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.521456 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.521472 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.624196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.624260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.624281 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.624312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.624337 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.719102 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:29:12.50396654 +0000 UTC Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.727653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.727733 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.727757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.727784 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.727804 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.766400 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:38 crc kubenswrapper[4770]: E0126 18:42:38.766608 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.830565 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.830606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.830614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.830628 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.830637 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.933865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.933922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.933933 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.933951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:38 crc kubenswrapper[4770]: I0126 18:42:38.933962 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:38Z","lastTransitionTime":"2026-01-26T18:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.036054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.036092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.036104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.036120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.036131 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.076855 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.077061 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.077180 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:42:47.077146801 +0000 UTC m=+51.642053573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.138242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.138306 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.138323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.138350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.138381 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.240795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.240836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.240844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.240857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.240868 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.304255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.304298 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.304308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.304322 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.304333 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.324055 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.328331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.328384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.328398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.328417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.328432 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.347154 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.352182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.352247 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.352283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.352319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.352344 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.375241 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.381032 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.381093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.381105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.381155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.381175 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.400564 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.405395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.405457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.405475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.405503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.405523 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.423143 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:39Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.423378 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.425268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.425313 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.425328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.425347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.425361 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.530107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.530201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.530229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.530267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.530303 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.633911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.633983 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.634001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.634028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.634048 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.719877 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:15:44.578078148 +0000 UTC Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.737496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.737552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.737571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.737597 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.737616 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.766557 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.766558 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.766733 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.766829 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.766851 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:39 crc kubenswrapper[4770]: E0126 18:42:39.767046 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.840744 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.840826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.840850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.840880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.840905 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.944622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.944679 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.944729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.944754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:39 crc kubenswrapper[4770]: I0126 18:42:39.944770 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:39Z","lastTransitionTime":"2026-01-26T18:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.046745 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.046781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.046789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.046802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.046810 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.150728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.150780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.150793 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.150813 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.150828 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.254181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.254219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.254227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.254242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.254252 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.356750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.356830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.356871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.356913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.356935 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.459315 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.459362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.459369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.459384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.459392 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.561688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.561818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.561841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.561869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.561890 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.664889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.664931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.664946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.664961 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.664971 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.721060 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:58:28.055880513 +0000 UTC Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.766128 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:40 crc kubenswrapper[4770]: E0126 18:42:40.766375 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.768836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.768904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.768934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.768962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.768987 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.872231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.872319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.872345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.872376 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.872399 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.975353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.975417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.975434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.975458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:40 crc kubenswrapper[4770]: I0126 18:42:40.975475 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:40Z","lastTransitionTime":"2026-01-26T18:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.079246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.079317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.079333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.079357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.079374 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.182694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.182793 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.182811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.182836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.182854 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.285355 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.285413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.285430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.285453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.285476 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.389366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.389424 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.389435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.389455 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.389467 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.492525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.492589 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.492603 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.492624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.492639 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.595647 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.595691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.595728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.595746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.595757 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.698021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.698074 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.698083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.698095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.698107 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.721867 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 03:15:10.544341075 +0000 UTC Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.766938 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.767035 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.767173 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:41 crc kubenswrapper[4770]: E0126 18:42:41.767169 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:41 crc kubenswrapper[4770]: E0126 18:42:41.767380 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:41 crc kubenswrapper[4770]: E0126 18:42:41.768134 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.768866 4770 scope.go:117] "RemoveContainer" containerID="0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.801567 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd
883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.802667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.802740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.802761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.802788 4770 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.802807 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.824197 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.836281 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.855873 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.874280 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.893443 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.905640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:41 crc 
kubenswrapper[4770]: I0126 18:42:41.905772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.905804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.905839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.905865 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:41Z","lastTransitionTime":"2026-01-26T18:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.922572 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.937444 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.954523 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.968335 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.981147 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:41 crc kubenswrapper[4770]: I0126 18:42:41.995030 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:41Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.008462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.008486 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.008496 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.008516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.008527 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.012469 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.028507 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.046358 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.062297 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.079447 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.098880 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/1.log" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.102040 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.102237 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.110769 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.110945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.111055 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.111158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.111244 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.117550 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.131540 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.150624 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.176781 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.204299 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.214293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.214520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.214601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.214736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.214833 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.234804 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.260659 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95df
ddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.282442 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.299363 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.319248 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\"
,\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.325645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.325673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.325688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.325721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.325731 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.330513 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc 
kubenswrapper[4770]: I0126 18:42:42.349237 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.361533 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.371058 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.382167 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.393927 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.418182 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:42Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.428040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.428069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.428077 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.428090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.428099 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.530867 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.531139 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.531258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.531359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.531445 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.633950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.633998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.634009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.634025 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.634037 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.722737 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:05:57.04497603 +0000 UTC Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.737002 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.737045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.737061 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.737081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.737094 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.767074 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:42 crc kubenswrapper[4770]: E0126 18:42:42.767224 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.839921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.839976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.839989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.840008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.840022 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.943412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.943467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.943481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.943506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:42 crc kubenswrapper[4770]: I0126 18:42:42.943520 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:42Z","lastTransitionTime":"2026-01-26T18:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.046271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.046923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.047135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.047311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.047480 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.150416 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.150481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.150505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.150537 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.150564 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.254107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.254176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.254199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.254227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.254250 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.357759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.357818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.357835 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.357856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.357874 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.460248 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.460313 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.460330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.460353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.460370 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.562766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.562833 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.562856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.562888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.562909 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.666620 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.666681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.666691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.666724 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.666735 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.723542 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:54:57.679620233 +0000 UTC Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.766188 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.766235 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.766235 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:43 crc kubenswrapper[4770]: E0126 18:42:43.766433 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:43 crc kubenswrapper[4770]: E0126 18:42:43.766517 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:43 crc kubenswrapper[4770]: E0126 18:42:43.766794 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.770846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.770981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.771004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.771031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.771052 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.874754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.874824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.874848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.874877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.874900 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.977357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.977408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.977425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.977445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:43 crc kubenswrapper[4770]: I0126 18:42:43.977460 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:43Z","lastTransitionTime":"2026-01-26T18:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.080129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.080201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.080223 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.080247 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.080265 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.111275 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/2.log" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.112152 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/1.log" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.115971 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6" exitCode=1 Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.116016 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.116059 4770 scope.go:117] "RemoveContainer" containerID="0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.117356 4770 scope.go:117] "RemoveContainer" containerID="d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6" Jan 26 18:42:44 crc kubenswrapper[4770]: E0126 18:42:44.117658 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.149741 4770 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eb
a8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3
ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.168034 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.183729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.183768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.183780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc 
kubenswrapper[4770]: I0126 18:42:44.183800 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.183813 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.187245 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.219282 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0af97067229ff57176d8c2a05b67606f09bfdb29a692350708ed45ff6c977aef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:28Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143188 6186 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 18:42:28.143734 6186 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 18:42:28.144188 6186 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 18:42:28.144235 6186 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 18:42:28.144271 6186 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 18:42:28.144309 6186 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 18:42:28.144338 6186 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 18:42:28.144395 6186 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 18:42:28.144403 6186 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 18:42:28.144420 6186 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 18:42:28.144435 6186 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 18:42:28.144469 6186 factory.go:656] Stopping watch factory\\\\nI0126 18:42:28.144491 6186 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] 
map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\
\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.235601 4770 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc 
kubenswrapper[4770]: I0126 18:42:44.252306 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.274360 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.287755 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.287821 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.287845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.287880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.287902 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.291043 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.309825 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.328329 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.349535 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.372888 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.387770 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.391515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.391581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.391600 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.391626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.391644 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.404910 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.422175 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.443438 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.461024 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:44Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.494369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.494428 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.494448 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.494480 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.494504 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.597477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.597548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.597573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.597602 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.597623 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.700617 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.700774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.700797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.700830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.700857 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.724920 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:54:55.253569573 +0000 UTC Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.767300 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:44 crc kubenswrapper[4770]: E0126 18:42:44.767504 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.804758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.804854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.804877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.804906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.804924 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.907919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.909259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.909423 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.909582 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:44 crc kubenswrapper[4770]: I0126 18:42:44.909776 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:44Z","lastTransitionTime":"2026-01-26T18:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.013077 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.013157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.013180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.013212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.013235 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.115900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.116270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.116471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.116683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.116892 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.123564 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/2.log" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.220433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.220519 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.220543 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.220637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.220672 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.324389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.324459 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.324484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.324514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.324534 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.428075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.428156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.428178 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.428211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.428232 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.468649 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.470123 4770 scope.go:117] "RemoveContainer" containerID="d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6" Jan 26 18:42:45 crc kubenswrapper[4770]: E0126 18:42:45.470401 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.492192 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.513078 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.530684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.530774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.530794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.530819 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.530838 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.533811 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.566351 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-reso
urces-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.586120 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.607433 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.633691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc 
kubenswrapper[4770]: I0126 18:42:45.633763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.633774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.633792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.633803 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.639457 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.655450 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.677056 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.699972 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.719250 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.725102 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:51:29.905468222 +0000 UTC Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.736913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.736986 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.737017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.737036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.737047 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.741410 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.760836 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.766227 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.766252 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.766377 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:45 crc kubenswrapper[4770]: E0126 18:42:45.766539 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:45 crc kubenswrapper[4770]: E0126 18:42:45.766627 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:45 crc kubenswrapper[4770]: E0126 18:42:45.766807 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.787757 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.804349 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.821393 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.839165 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.839204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.839220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.839244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.839261 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.842093 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f
3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.858328 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.873812 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.893637 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.908805 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.921387 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.940981 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.941827 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.941978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.942166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.942292 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.942388 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:45Z","lastTransitionTime":"2026-01-26T18:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.955093 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whe
reabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.965101 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.975991 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:45 crc kubenswrapper[4770]: I0126 18:42:45.991188 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T1
8:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:45Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.002884 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.014625 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.037876 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f03
82715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.045276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.045483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.045548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.045624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.047899 4770 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.057192 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.074769 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.098589 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.110425 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:46Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.151056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.151109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.151126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.151149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.151167 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.252759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.252798 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.252810 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.252825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.252837 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.355081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.355140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.355201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.355226 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.355243 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.458731 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.458794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.458813 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.458838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.458855 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.561825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.561899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.561919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.561945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.561962 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.664400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.664453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.664464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.664482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.664495 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.726607 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:28:50.100645067 +0000 UTC Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.766027 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:46 crc kubenswrapper[4770]: E0126 18:42:46.766201 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.767349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.767419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.767446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.767489 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.767514 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.869669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.869736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.869752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.869775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.869793 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.971973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.972020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.972031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.972047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:46 crc kubenswrapper[4770]: I0126 18:42:46.972064 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:46Z","lastTransitionTime":"2026-01-26T18:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.074445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.074495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.074508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.074527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.074541 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.173975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.174182 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.174284 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:43:03.174262524 +0000 UTC m=+67.739169266 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.176633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.176730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.176753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.176775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.176786 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.279483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.279525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.279536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.279551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.279563 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.382091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.382427 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.382439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.382458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.382470 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.485503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.485825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.485997 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.486173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.486362 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.578505 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.578743 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:43:19.578675876 +0000 UTC m=+84.143582648 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.578821 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.578892 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.578973 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.579039 4770 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579080 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579120 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579146 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579149 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579233 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:43:19.57920411 +0000 UTC m=+84.144110902 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579241 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579271 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579293 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579323 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579274 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:43:19.579253761 +0000 UTC m=+84.144160633 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579369 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:43:19.579350765 +0000 UTC m=+84.144257537 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.579395 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:43:19.579381275 +0000 UTC m=+84.144288047 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.589238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.589286 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.589304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.589326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.589342 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.691686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.691791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.691815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.691844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.691862 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.727343 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:05:29.527773658 +0000 UTC Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.766513 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.766547 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.766513 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.766631 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.766685 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:47 crc kubenswrapper[4770]: E0126 18:42:47.766878 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.795146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.795193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.795203 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.795217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.795227 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.897370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.897687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.897776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.897865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:47 crc kubenswrapper[4770]: I0126 18:42:47.897942 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:47Z","lastTransitionTime":"2026-01-26T18:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.000795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.000840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.000852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.000907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.000923 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.103274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.103344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.103367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.103397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.103423 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.159071 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.172912 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.187374 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.203465 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.205965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.206019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.206032 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.206052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.206064 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.236057 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.255837 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.272606 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.289692 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f2
58c6fc371b7286b03f53e1e8482f1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.306738 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.308473 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.308751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.308986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.309494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.309813 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.323879 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.336148 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.347910 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.363217 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.380299 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.391857 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.403323 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.413398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.413449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.413461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.413480 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.413491 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.416573 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.435516 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.449251 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:48Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.516181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.516238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.516255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.516280 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.516296 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.618966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.619050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.619071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.619096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.619112 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.721378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.721427 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.721443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.721466 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.721482 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.727537 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:23:23.69610574 +0000 UTC Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.766095 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:48 crc kubenswrapper[4770]: E0126 18:42:48.766277 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.825062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.825135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.825162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.825190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.825209 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.928959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.929020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.929037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.929062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:48 crc kubenswrapper[4770]: I0126 18:42:48.929079 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:48Z","lastTransitionTime":"2026-01-26T18:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.032532 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.032594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.032612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.032636 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.032652 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.135729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.135786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.135804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.135829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.135846 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.238615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.238681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.238735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.238767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.238786 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.342160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.342222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.342244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.342271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.342291 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.444598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.444667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.444689 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.444754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.444776 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.462194 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.462230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.462246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.462265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.462283 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.486636 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.492767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.492820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.492842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.492869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.492889 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.511863 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.516655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.516749 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.516770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.516794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.516810 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.534930 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.540251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.540320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.540336 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.540357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.540373 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.562036 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.566774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.566844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.566864 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.566890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.566908 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.587471 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:49Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.587769 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.589566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.589630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.589655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.589685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.589746 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.692496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.692569 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.692593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.692623 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.692647 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.728277 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:20:45.063827733 +0000 UTC Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.766959 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.767034 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.767077 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.767194 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.767330 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:49 crc kubenswrapper[4770]: E0126 18:42:49.767512 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.795344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.795395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.795436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.795458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.795475 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.898356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.898431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.898449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.898475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:49 crc kubenswrapper[4770]: I0126 18:42:49.898493 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:49Z","lastTransitionTime":"2026-01-26T18:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.001883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.001962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.001985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.002014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.002042 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.105049 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.105096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.105107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.105126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.105136 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.207737 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.207805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.207828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.207857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.207882 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.310218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.310274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.310291 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.310312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.310329 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.413806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.413870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.413887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.413912 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.413929 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.516508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.516564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.516580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.516604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.516621 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.619465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.619540 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.619564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.619600 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.619623 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.722401 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.722455 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.722472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.722494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.722510 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.729189 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:11:34.327944231 +0000 UTC Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.766808 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:50 crc kubenswrapper[4770]: E0126 18:42:50.766980 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.832033 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.832080 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.832094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.832113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.832125 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.935440 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.935513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.935536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.935563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:50 crc kubenswrapper[4770]: I0126 18:42:50.935584 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:50Z","lastTransitionTime":"2026-01-26T18:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.038672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.038794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.038813 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.038833 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.038847 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.142147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.142219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.142244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.142273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.142296 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.246131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.246177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.246190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.246208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.246219 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.349865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.349952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.349974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.350003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.350023 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.453533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.453613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.453648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.453677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.453752 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.557129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.557204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.557230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.557260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.557283 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.660024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.660135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.660147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.660163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.660174 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.729825 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:03:57.479672676 +0000 UTC Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.763844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.764073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.764104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.764128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.764145 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.766326 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.766374 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:51 crc kubenswrapper[4770]: E0126 18:42:51.766479 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.766322 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:51 crc kubenswrapper[4770]: E0126 18:42:51.766651 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:51 crc kubenswrapper[4770]: E0126 18:42:51.766819 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.866818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.866853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.866862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.866875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.866883 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.969594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.969633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.969645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.969660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:51 crc kubenswrapper[4770]: I0126 18:42:51.969673 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:51Z","lastTransitionTime":"2026-01-26T18:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.073899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.073950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.074004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.074019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.074030 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.177123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.177194 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.177215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.177242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.177265 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.280156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.280201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.280210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.280224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.280234 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.384424 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.384495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.384516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.384545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.384565 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.487838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.487985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.488007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.488030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.488052 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.591145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.591220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.591257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.591288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.591308 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.694348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.694460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.694478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.694515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.694535 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.730676 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:55:24.262109813 +0000 UTC Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.766112 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:52 crc kubenswrapper[4770]: E0126 18:42:52.766329 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.797155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.797195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.797206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.797222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.797235 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.899357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.899405 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.899426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.899448 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:52 crc kubenswrapper[4770]: I0126 18:42:52.899462 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:52Z","lastTransitionTime":"2026-01-26T18:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.002529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.002593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.002604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.002622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.002634 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.106004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.106058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.106074 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.106097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.106114 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.209176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.209303 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.209321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.209387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.209412 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.312052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.312114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.312138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.312167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.312190 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.418271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.418327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.418343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.418366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.418386 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.522270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.522345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.522367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.522397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.522420 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.625918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.625981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.626010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.626038 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.626060 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.729597 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.729670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.729695 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.729761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.729783 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.730934 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:52:14.165420059 +0000 UTC Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.766885 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.767010 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:53 crc kubenswrapper[4770]: E0126 18:42:53.767110 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.767184 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:53 crc kubenswrapper[4770]: E0126 18:42:53.767357 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:53 crc kubenswrapper[4770]: E0126 18:42:53.767417 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.832950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.833017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.833030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.833050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.833063 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.936133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.936212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.936232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.936264 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:53 crc kubenswrapper[4770]: I0126 18:42:53.936287 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:53Z","lastTransitionTime":"2026-01-26T18:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.039432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.039505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.039527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.039550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.039569 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.142655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.142746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.142774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.142801 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.142822 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.246182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.246320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.246404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.246792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.246868 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.350329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.350445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.350476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.350506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.350532 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.453648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.453728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.453753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.453781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.453802 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.556287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.556568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.556673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.556789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.556891 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.660374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.660744 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.660872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.661006 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.661119 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.732078 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:30:50.72781665 +0000 UTC Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.764727 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.765021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.765105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.765189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.765268 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.766884 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:54 crc kubenswrapper[4770]: E0126 18:42:54.767019 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.868507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.868573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.868588 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.868606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.868642 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.972380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.972418 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.972431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.972449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:54 crc kubenswrapper[4770]: I0126 18:42:54.972459 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:54Z","lastTransitionTime":"2026-01-26T18:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.074842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.074887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.074899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.074916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.074928 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.176743 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.176789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.176805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.176828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.176846 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.279884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.280229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.280342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.280466 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.280578 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.384820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.385173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.385300 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.385422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.385550 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.488880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.488945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.488964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.488992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.489010 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.591737 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.591780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.591792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.591809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.591820 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.695453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.695510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.695528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.695549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.695564 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.732872 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:03:48.509810532 +0000 UTC Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.766981 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.766998 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.767390 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:55 crc kubenswrapper[4770]: E0126 18:42:55.767655 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:55 crc kubenswrapper[4770]: E0126 18:42:55.767757 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:55 crc kubenswrapper[4770]: E0126 18:42:55.767845 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.778659 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.788189 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d
813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.798394 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.798433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.798441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.798455 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.798937 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.799675 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a
79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.818876 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.831192 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.846370 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.860545 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.877982 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.891770 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.901384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.901457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.901478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:55 crc 
kubenswrapper[4770]: I0126 18:42:55.901508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.901529 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:55Z","lastTransitionTime":"2026-01-26T18:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.906208 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.928067 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.945444 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.975263 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:55 crc kubenswrapper[4770]: I0126 18:42:55.990537 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:55Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.004740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 
18:42:56.004824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.004850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.004880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.004904 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.004919 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:56Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.021820 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:56Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.040094 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:56Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.060638 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:56Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.109094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.109179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.109227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.109288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.109314 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.212768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.212838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.212861 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.212892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.212915 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.315590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.315642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.315653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.315667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.315676 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.418358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.418404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.418416 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.418432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.418443 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.521809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.521863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.521876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.521896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.521910 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.624456 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.624528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.624548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.624578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.624597 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.729488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.729605 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.729763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.729810 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.729983 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.733685 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:03:22.121795926 +0000 UTC Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.766134 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:56 crc kubenswrapper[4770]: E0126 18:42:56.766363 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.833130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.833191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.833208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.833236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.833253 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.936072 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.936397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.936482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.936572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:56 crc kubenswrapper[4770]: I0126 18:42:56.936692 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:56Z","lastTransitionTime":"2026-01-26T18:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.039209 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.039271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.039281 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.039297 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.039307 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.142044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.142112 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.142137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.142163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.142184 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.245217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.245282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.245305 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.245349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.245374 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.348157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.348213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.348229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.348265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.348286 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.451369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.451518 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.451548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.451574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.451594 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.553877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.554102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.554207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.554280 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.554342 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.658446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.658505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.658519 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.658538 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.658553 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.735363 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:37:27.828491665 +0000 UTC Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.761967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.762054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.762073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.762095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.762112 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.769995 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:57 crc kubenswrapper[4770]: E0126 18:42:57.770338 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.770144 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.770060 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:57 crc kubenswrapper[4770]: E0126 18:42:57.770755 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:57 crc kubenswrapper[4770]: E0126 18:42:57.770923 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.864718 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.864759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.864771 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.864784 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.864792 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.968237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.968317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.968341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.968370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:57 crc kubenswrapper[4770]: I0126 18:42:57.968393 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:57Z","lastTransitionTime":"2026-01-26T18:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.071060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.071125 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.071143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.071166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.071183 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.173320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.173398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.173413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.173453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.173468 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.275084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.275120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.275131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.275148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.275158 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.377712 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.377742 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.377750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.377763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.377771 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.480856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.480939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.480956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.480977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.480994 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.584413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.584494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.584521 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.584552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.584576 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.687433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.687494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.687513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.687543 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.687566 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.736659 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:34:09.268272765 +0000 UTC Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.766316 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:42:58 crc kubenswrapper[4770]: E0126 18:42:58.766609 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.790266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.790336 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.790360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.790390 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.790414 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.893089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.893138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.893147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.893163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.893173 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.995742 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.996015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.996112 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.996213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:58 crc kubenswrapper[4770]: I0126 18:42:58.996330 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:58Z","lastTransitionTime":"2026-01-26T18:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.099295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.099594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.099766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.099907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.100051 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.201735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.201777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.201807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.201824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.201833 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.304266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.304306 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.304317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.304333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.304346 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.407171 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.407215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.407233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.407248 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.407258 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.510305 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.510356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.510367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.510387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.510407 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.612892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.612928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.612939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.612953 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.612964 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.715274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.715358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.715375 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.715395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.715412 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.737612 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:23:21.910803259 +0000 UTC Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.766889 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.766964 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.766997 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.767301 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.767660 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.767730 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.767965 4770 scope.go:117] "RemoveContainer" containerID="d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6" Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.768190 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.817599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.817729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.817815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.817892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.817962 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.906763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.906816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.906834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.906859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.906876 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.927989 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.931351 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.931380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.931392 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.931410 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.931423 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.947632 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.951822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.951859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.951870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.951883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.951893 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.967503 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.970768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.970899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.970965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.971048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.971112 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:42:59 crc kubenswrapper[4770]: E0126 18:42:59.983212 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:42:59Z is after 2025-08-24T17:21:41Z" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.986580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.986752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.986820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.986889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:42:59 crc kubenswrapper[4770]: I0126 18:42:59.986974 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:42:59Z","lastTransitionTime":"2026-01-26T18:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: E0126 18:43:00.002862 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:00Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:00 crc kubenswrapper[4770]: E0126 18:43:00.003235 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.004345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.004442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.004519 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.004727 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.005050 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.107206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.107251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.107262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.107277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.107286 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.209759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.209817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.209830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.209847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.209858 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.312051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.312091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.312101 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.312114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.312124 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.414514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.414572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.414590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.414617 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.414635 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.516973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.517010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.517023 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.517041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.517067 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.620102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.620164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.620188 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.620213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.620229 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.722951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.722986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.722996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.723013 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.723024 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.738157 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:43:16.095169051 +0000 UTC Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.766286 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:00 crc kubenswrapper[4770]: E0126 18:43:00.766655 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.826154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.826189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.826201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.826216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.826226 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.929064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.929123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.929143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.929166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:00 crc kubenswrapper[4770]: I0126 18:43:00.929185 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:00Z","lastTransitionTime":"2026-01-26T18:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.032261 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.032561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.032642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.032765 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.032867 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.135898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.135949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.135965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.135987 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.136004 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.238210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.238276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.238294 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.238319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.238338 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.345075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.345127 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.345140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.345158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.345174 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.447376 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.447420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.447431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.447447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.447460 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.549797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.549827 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.549835 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.549857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.549865 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.652362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.652389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.652446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.652460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.652470 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.738425 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:59:13.420406344 +0000 UTC Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.755488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.755539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.755554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.755571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.755582 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.766028 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.766085 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.766087 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:01 crc kubenswrapper[4770]: E0126 18:43:01.766178 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:01 crc kubenswrapper[4770]: E0126 18:43:01.766303 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:01 crc kubenswrapper[4770]: E0126 18:43:01.766420 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.858494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.858550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.858568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.858593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.858613 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.961390 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.961452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.961468 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.961497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:01 crc kubenswrapper[4770]: I0126 18:43:01.961513 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:01Z","lastTransitionTime":"2026-01-26T18:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.064891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.064966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.064978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.064999 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.065010 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.167658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.167759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.167777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.167802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.167820 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.270377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.270429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.270441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.270459 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.270474 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.372969 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.373020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.373040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.373063 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.373079 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.475347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.475388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.475396 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.475409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.475419 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.577378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.577421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.577433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.577454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.577468 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.680175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.680211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.680225 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.680241 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.680256 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.739173 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:46:36.346997102 +0000 UTC Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.766378 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:02 crc kubenswrapper[4770]: E0126 18:43:02.766535 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.782664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.782729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.782743 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.782757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.782769 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.885612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.885665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.885681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.885728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.885746 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.988373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.988434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.988450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.988473 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:02 crc kubenswrapper[4770]: I0126 18:43:02.988495 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:02Z","lastTransitionTime":"2026-01-26T18:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.090634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.090675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.090685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.090717 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.090729 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.190210 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/0.log" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.190293 4770 generic.go:334] "Generic (PLEG): container finished" podID="cf1d4063-db34-411a-bdbc-3736acf7f126" containerID="4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499" exitCode=1 Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.190351 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerDied","Data":"4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.191031 4770 scope.go:117] "RemoveContainer" containerID="4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.192555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.192584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.192592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.192607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.192615 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.223113 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.236329 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.239485 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:03 crc kubenswrapper[4770]: E0126 18:43:03.239662 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:43:03 crc kubenswrapper[4770]: E0126 18:43:03.239734 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:43:35.239717694 +0000 UTC m=+99.804624426 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.264257 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a08
33a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.280564 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.295094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.295123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.295134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.295150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.295162 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.297130 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.315230 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.330188 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.345178 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.358814 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.374333 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.390809 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.397133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.397176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.397186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.397202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.397212 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.405148 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whe
reabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.418279 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.428756 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.443298 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.456816 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.476525 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.492894 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:03Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.499835 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.499905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.499922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.499948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.499965 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.602851 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.602904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.602919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.602937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.602948 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.705765 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.705811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.705821 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.705841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.705852 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.740190 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:28:47.505590858 +0000 UTC Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.766764 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.766802 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.766862 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:03 crc kubenswrapper[4770]: E0126 18:43:03.766885 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:03 crc kubenswrapper[4770]: E0126 18:43:03.767129 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:03 crc kubenswrapper[4770]: E0126 18:43:03.767225 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.807681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.807773 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.807791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.807822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.807840 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.910664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.910738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.910753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.910797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:03 crc kubenswrapper[4770]: I0126 18:43:03.910810 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:03Z","lastTransitionTime":"2026-01-26T18:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.013304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.013366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.013383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.013407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.013428 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.116464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.116511 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.116526 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.116544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.116557 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.194805 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/0.log" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.194858 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerStarted","Data":"7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.216638 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.219041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.219093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.219112 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.219135 4770 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.219153 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.232899 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.248979 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.273452 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.286406 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.301662 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.321527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.321567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.321576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.321592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.321603 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.324128 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.341688 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.353005 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.365042 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.380152 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.404350 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.417487 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.423735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.423770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.423778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.423791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.423800 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.429628 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.444083 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.458417 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.471576 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.484760 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:04Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.526366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.526442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.526465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.526495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.526517 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.629529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.629566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.629574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.629586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.629594 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.732601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.732640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.732649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.732662 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.732671 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.741092 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:12:00.432640309 +0000 UTC Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.766891 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:04 crc kubenswrapper[4770]: E0126 18:43:04.767050 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.834612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.834686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.834728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.834753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.834769 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.937200 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.937243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.937252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.937268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:04 crc kubenswrapper[4770]: I0126 18:43:04.937277 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:04Z","lastTransitionTime":"2026-01-26T18:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.039550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.039591 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.039601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.039618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.039629 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.142287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.142327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.142338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.142354 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.142364 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.245097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.245182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.245206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.245236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.245261 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.348123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.348186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.348201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.348218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.348229 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.450659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.450731 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.450746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.450764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.450775 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.553489 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.553533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.553548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.553563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.553575 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.656780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.656837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.656855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.656879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.656895 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.742037 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:34:35.587281579 +0000 UTC Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.759301 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.759354 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.759362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.759377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.759385 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.766678 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.766743 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.766743 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:05 crc kubenswrapper[4770]: E0126 18:43:05.768034 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:05 crc kubenswrapper[4770]: E0126 18:43:05.768111 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:05 crc kubenswrapper[4770]: E0126 18:43:05.768187 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.781211 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.793551 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.808247 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.823461 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.840790 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.861295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.861320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.861330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.861342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.861351 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.864829 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.879356 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.897838 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.909691 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.923312 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.944444 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.957936 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.963598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.963642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.963659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.963680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.963723 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:05Z","lastTransitionTime":"2026-01-26T18:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.975052 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:05 crc kubenswrapper[4770]: I0126 18:43:05.991556 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:05Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.005911 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.021217 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.040886 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.051951 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:06Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.066115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.066140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.066153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.066169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.066179 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.168186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.168213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.168221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.168233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.168242 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.270810 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.270866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.270879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.270903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.270920 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.374170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.374300 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.374330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.374363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.374387 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.476158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.476223 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.476270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.476287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.476297 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.578470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.578512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.578520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.578533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.578543 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.680580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.680625 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.680635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.680651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.680664 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.742882 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:06:42.096956341 +0000 UTC Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.769021 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:06 crc kubenswrapper[4770]: E0126 18:43:06.769280 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.783019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.783054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.783064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.783078 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.783086 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.886511 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.886558 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.886567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.886581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.886590 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.988498 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.988543 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.988554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.988571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:06 crc kubenswrapper[4770]: I0126 18:43:06.988583 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:06Z","lastTransitionTime":"2026-01-26T18:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.090528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.090580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.090594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.090615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.090627 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.192806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.192865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.192881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.192911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.192932 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.295482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.295521 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.295529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.295542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.295551 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.397968 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.398029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.398046 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.398070 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.398086 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.501605 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.501642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.501650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.501663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.501673 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.604291 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.604348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.604367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.604392 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.604409 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.706414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.706437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.706444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.706456 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.706465 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.743986 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:57:34.77593571 +0000 UTC Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.766828 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.766867 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:07 crc kubenswrapper[4770]: E0126 18:43:07.766959 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.766832 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:07 crc kubenswrapper[4770]: E0126 18:43:07.767079 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:07 crc kubenswrapper[4770]: E0126 18:43:07.767131 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.808652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.808721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.808732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.808751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.808762 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.911264 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.911304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.911315 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.911333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:07 crc kubenswrapper[4770]: I0126 18:43:07.911343 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:07Z","lastTransitionTime":"2026-01-26T18:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.014040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.014080 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.014089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.014104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.014114 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.116399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.116441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.116451 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.116464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.116472 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.219490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.219553 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.219567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.219584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.219595 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.322018 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.322058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.322069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.322084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.322095 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.424286 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.424339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.424356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.424379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.424406 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.527197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.527265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.527291 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.527319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.527340 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.629967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.630044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.630070 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.630099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.630120 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.732773 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.732855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.732892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.732924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.732944 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.744130 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:45:19.675461693 +0000 UTC Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.766474 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:08 crc kubenswrapper[4770]: E0126 18:43:08.766638 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.835554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.835609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.835629 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.835651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.835670 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.937927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.937977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.937992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.938011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:08 crc kubenswrapper[4770]: I0126 18:43:08.938025 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:08Z","lastTransitionTime":"2026-01-26T18:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.040106 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.040181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.040201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.040225 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.040241 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.142974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.143046 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.143065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.143088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.143106 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.246346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.246394 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.246411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.246433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.246451 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.349656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.349811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.349837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.349866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.349887 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.452799 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.452863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.452886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.452914 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.452935 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.555680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.555786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.555804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.555829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.555846 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.658972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.659021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.659033 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.659047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.659057 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.744852 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:45:55.919883999 +0000 UTC Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.762825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.762892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.762908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.762929 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.762942 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.766161 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.766187 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:09 crc kubenswrapper[4770]: E0126 18:43:09.766337 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.766187 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:09 crc kubenswrapper[4770]: E0126 18:43:09.766387 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:09 crc kubenswrapper[4770]: E0126 18:43:09.766469 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.865978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.866020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.866032 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.866048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.866059 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.969022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.969088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.969106 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.969130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:09 crc kubenswrapper[4770]: I0126 18:43:09.969147 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:09Z","lastTransitionTime":"2026-01-26T18:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.072287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.072353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.072370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.072393 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.072409 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.174792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.174848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.174867 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.174888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.174912 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.277414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.277482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.277505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.277535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.277557 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.374141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.374204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.374217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.374257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.374272 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: E0126 18:43:10.393969 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.399430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.399521 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.399539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.399560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.399582 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: E0126 18:43:10.419296 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.423284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.423330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.423338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.423352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.423361 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: E0126 18:43:10.441962 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.446818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.446868 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.446886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.446910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.446929 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: E0126 18:43:10.465492 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.469767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.469813 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.469847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.469866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.469877 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: E0126 18:43:10.493391 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:10Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:10 crc kubenswrapper[4770]: E0126 18:43:10.493576 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.495735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.495840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.495860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.495886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.495904 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.598688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.598875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.598890 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.598904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.598915 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.702041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.702123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.702145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.702172 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.702195 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.745425 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:12:22.742985199 +0000 UTC Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.766783 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:10 crc kubenswrapper[4770]: E0126 18:43:10.766958 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.782156 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.806161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.806258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.806277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.806305 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.806322 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.909564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.909642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.909666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.909898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:10 crc kubenswrapper[4770]: I0126 18:43:10.909945 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:10Z","lastTransitionTime":"2026-01-26T18:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.012568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.012624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.012641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.012663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.012680 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.115727 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.115758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.115769 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.115784 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.115795 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.217822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.217878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.217894 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.217916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.217932 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.320565 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.320641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.320666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.320730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.320753 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.424217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.424278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.424295 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.424320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.424340 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.528076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.528140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.528159 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.528183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.528201 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.630981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.631267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.631288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.631309 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.631327 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.734120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.734176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.734191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.734213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.734228 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.746069 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:46:39.022307876 +0000 UTC Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.766436 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.766473 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.766548 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:11 crc kubenswrapper[4770]: E0126 18:43:11.766664 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:11 crc kubenswrapper[4770]: E0126 18:43:11.766790 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:11 crc kubenswrapper[4770]: E0126 18:43:11.766901 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.836764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.836827 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.836853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.836884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.836907 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.940877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.940940 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.940959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.940982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:11 crc kubenswrapper[4770]: I0126 18:43:11.940999 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:11Z","lastTransitionTime":"2026-01-26T18:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.043655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.043733 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.043750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.043774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.043791 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.146542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.146596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.146607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.146624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.146636 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.250079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.250233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.250256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.250285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.250309 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.357737 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.357805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.357829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.357859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.357882 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.460558 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.460592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.460606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.460623 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.460635 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.563369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.563434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.563443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.563458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.563468 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.666880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.666947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.666965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.666993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.667010 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.746656 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:51:48.531600669 +0000 UTC Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.766887 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:12 crc kubenswrapper[4770]: E0126 18:43:12.767073 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.769781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.769847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.769874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.769901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.769924 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.873269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.873328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.873341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.873360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.873374 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.976123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.976263 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.976289 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.976317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:12 crc kubenswrapper[4770]: I0126 18:43:12.976337 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:12Z","lastTransitionTime":"2026-01-26T18:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.079011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.079078 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.079096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.079124 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.079145 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.182514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.182566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.182585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.182609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.182626 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.285349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.285402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.285419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.285440 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.285457 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.389154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.389224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.389249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.389278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.389299 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.492848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.492905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.492927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.492954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.492976 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.596748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.596812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.596831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.596860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.596883 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.700369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.700441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.700462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.700490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.700511 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.747187 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:05:22.397875672 +0000 UTC Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.766866 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.766917 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:13 crc kubenswrapper[4770]: E0126 18:43:13.767022 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.767140 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:13 crc kubenswrapper[4770]: E0126 18:43:13.767284 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:13 crc kubenswrapper[4770]: E0126 18:43:13.767588 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.803154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.803207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.803219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.803235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.803247 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.905812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.905847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.905859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.905874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:13 crc kubenswrapper[4770]: I0126 18:43:13.905885 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:13Z","lastTransitionTime":"2026-01-26T18:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.008326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.008432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.008450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.008479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.008498 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.111217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.111271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.111288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.111312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.111329 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.214936 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.215321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.215524 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.215787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.215998 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.319328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.319403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.319422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.319447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.319465 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.422774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.422812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.422821 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.422835 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.422846 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.525960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.526010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.526022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.526037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.526048 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.629360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.629406 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.629424 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.629449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.629467 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.735610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.735665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.735680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.735726 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.735738 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.748270 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:55:34.206191174 +0000 UTC Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.766151 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:14 crc kubenswrapper[4770]: E0126 18:43:14.766385 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.767434 4770 scope.go:117] "RemoveContainer" containerID="d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.837785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.837846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.837863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.837886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.837903 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.940778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.940824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.940836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.940852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:14 crc kubenswrapper[4770]: I0126 18:43:14.940864 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:14Z","lastTransitionTime":"2026-01-26T18:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.043973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.044056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.044079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.044109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.044131 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.147608 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.147668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.147686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.147744 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.147767 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.251528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.251607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.251634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.251663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.251684 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.355276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.355413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.355433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.355461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.355478 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.458875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.458927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.458943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.458966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.458984 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.562516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.562588 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.562609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.562637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.562661 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.667260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.667327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.667349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.667379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.667403 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.750583 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:18:39.653571449 +0000 UTC Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.767016 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.767093 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:15 crc kubenswrapper[4770]: E0126 18:43:15.767143 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:15 crc kubenswrapper[4770]: E0126 18:43:15.767229 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.767373 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:15 crc kubenswrapper[4770]: E0126 18:43:15.767445 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.772582 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.772610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.772619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.772630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.772639 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.806111 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.816139 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.827477 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.843289 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.855002 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.865919 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a6d8e81-0ee8-46d2-aa68-e1f2a6ecd9ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795795fcad582044039d1aa0be8059b315cea9e8596158c10a6fb2717fa04ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.874671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.874711 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.874719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.874731 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.874739 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.877948 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.889832 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.900302 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.914074 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.926294 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.939539 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.948245 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.958031 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.969973 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.982113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.982143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.982156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.982170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.982234 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:15Z","lastTransitionTime":"2026-01-26T18:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:15 crc kubenswrapper[4770]: I0126 18:43:15.987145 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.001120 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:15Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.012852 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.024402 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.084539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.084565 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.084572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 
18:43:16.084584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.084593 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.187472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.187522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.187538 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.187560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.187576 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.236427 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/2.log" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.239384 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.239817 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.253284 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.265679 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.277241 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.289655 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.290211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.290244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.290259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc 
kubenswrapper[4770]: I0126 18:43:16.290279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.290292 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.302921 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26
T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.331617 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.347316 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.359477 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.369203 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.379249 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.391244 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.392825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.392860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.392872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.392889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.392901 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.402471 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.414755 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.426006 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a6d8e81-0ee8-46d2-aa68-e1f2a6ecd9ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795795fcad582044039d1aa0be8059b315cea9e8596158c10a6fb2717fa04ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12
962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.445527 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.456067 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.469033 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.488238 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 
machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.495056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.495083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.495096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.495112 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.495121 4770 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.499243 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:16Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:16 crc 
kubenswrapper[4770]: I0126 18:43:16.598030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.598073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.598084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.598100 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.598112 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.702009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.702093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.702119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.702155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.702193 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.751486 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:09:12.711741865 +0000 UTC Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.767203 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:16 crc kubenswrapper[4770]: E0126 18:43:16.767544 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.806341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.806394 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.806417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.806444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.806466 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.908913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.908974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.908991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.909014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:16 crc kubenswrapper[4770]: I0126 18:43:16.909034 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:16Z","lastTransitionTime":"2026-01-26T18:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.011610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.011681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.011736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.011762 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.011779 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.113994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.114034 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.114046 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.114060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.114071 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.217457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.217529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.217549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.217573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.217591 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.245511 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/3.log" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.246644 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/2.log" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.251532 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97" exitCode=1 Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.251575 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.251617 4770 scope.go:117] "RemoveContainer" containerID="d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.253028 4770 scope.go:117] "RemoveContainer" containerID="df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97" Jan 26 18:43:17 crc kubenswrapper[4770]: E0126 18:43:17.253366 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.271584 4770 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.291319 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.311383 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.320253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.320323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.320335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.320354 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.320366 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.329005 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.341809 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a6d8e81-0ee8-46d2-aa68-e1f2a6ecd9ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795795fcad582044039d1aa0be8059b315cea9e8596158c10a6fb2717fa04ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.370622 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.387403 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.402053 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.424098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.424165 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.424184 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.424210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.424228 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.433103 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7058d67016f485ad76d276a9aee6c80dd30dfcc409735a18e49d586010cdde6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:42:43Z\\\",\\\"message\\\":\\\"126 18:42:42.800340 6388 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0126 18:42:42.799888 6388 services_controller.go:434] Service openshift-machine-api/machine-api-controllers retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{machine-api-controllers openshift-machine-api 1cbb1d8a-02ea-4ab8-8f79-4dee9d158847 6869 0 2025-02-23 05:27:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[k8s-app:controller] map[capability.openshift.io/name:MachineAPI exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:machine-api-controllers-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0074b3b5b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:machine-mtrc,Protocol:TCP,Port:8441,TargetPort:{1 0 machine-mtrc},NodePort:0,AppProtocol:nil,},ServicePort{Name:machineset-mtrc,Protocol:TCP,Port:8442,TargetPort:{1 0 machineset-mtrc},NodePort:0,AppProtocol:nil,},Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:16Z\\\",\\\"message\\\":\\\"3] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 18:43:16.518202 6796 
model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:43:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745
eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.448315 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.465638 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.483303 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.501224 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.521179 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.527960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.528012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.528025 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc 
kubenswrapper[4770]: I0126 18:43:17.528044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.528056 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.540074 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26
T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.559287 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.581321 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.597456 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.610186 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:17Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.631491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.631532 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.631542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.631560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.631572 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.733472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.733523 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.733538 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.733560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.733575 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.751936 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:12:21.784820737 +0000 UTC Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.766722 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.766780 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.766834 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:17 crc kubenswrapper[4770]: E0126 18:43:17.766885 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:17 crc kubenswrapper[4770]: E0126 18:43:17.766976 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:17 crc kubenswrapper[4770]: E0126 18:43:17.767083 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.836819 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.836880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.836892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.836912 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.836927 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.940772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.940822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.940841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.940864 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:17 crc kubenswrapper[4770]: I0126 18:43:17.940881 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:17Z","lastTransitionTime":"2026-01-26T18:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.044844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.044906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.044933 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.044961 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.044980 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.147732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.147765 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.147774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.147786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.147794 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.250944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.251021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.251042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.251071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.251090 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.258957 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/3.log" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.264914 4770 scope.go:117] "RemoveContainer" containerID="df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97" Jan 26 18:43:18 crc kubenswrapper[4770]: E0126 18:43:18.265245 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.284615 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.300760 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.322303 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.344261 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.364428 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.364520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.364549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.364583 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.364608 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.366571 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.383817 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.405901 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.425249 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.437976 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.455015 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.467674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.467806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.467825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.467847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.467866 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.474322 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51
d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.499192 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.520149 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.542752 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.570874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.570968 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.570988 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.571012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.571030 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.574868 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:16Z\\\",\\\"message\\\":\\\"3] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 18:43:16.518202 6796 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.592939 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.610616 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a6d8e81-0ee8-46d2-aa68-e1f2a6ecd9ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795795fcad582044039d1aa0be8059b315cea9e8596158c10a6fb2717fa04ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.651243 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.671540 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:18Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.673859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.673964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.673990 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc 
kubenswrapper[4770]: I0126 18:43:18.674172 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.674284 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.752655 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:31:43.057973146 +0000 UTC Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.767016 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:18 crc kubenswrapper[4770]: E0126 18:43:18.767192 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.777016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.777073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.777098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.777125 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.777144 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.879930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.879994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.880017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.880048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.880070 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.982985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.983027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.983037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.983054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:18 crc kubenswrapper[4770]: I0126 18:43:18.983066 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:18Z","lastTransitionTime":"2026-01-26T18:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.086528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.086615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.086640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.086668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.086692 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.189429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.189499 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.189526 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.189554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.189576 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.292785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.292839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.292888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.292918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.292940 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.395587 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.395648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.395667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.395692 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.395762 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.498889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.498946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.498963 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.498987 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.499006 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.602025 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.602082 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.602099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.602127 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.602167 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.615584 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.615869 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.615895 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.61585474 +0000 UTC m=+148.180761512 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.615948 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.616041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.616101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616116 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:43:19 crc 
kubenswrapper[4770]: E0126 18:43:19.616201 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.616181009 +0000 UTC m=+148.181087781 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616250 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616279 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616291 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616338 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616344 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616353 4770 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616298 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616445 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.616415595 +0000 UTC m=+148.181322377 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616541 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.616525148 +0000 UTC m=+148.181431910 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.616564 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.616552339 +0000 UTC m=+148.181459111 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.705054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.705091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.705103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.705121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.705133 4770 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.753918 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:48:02.324908196 +0000 UTC Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.766383 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.766465 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.766540 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.766410 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.766672 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:19 crc kubenswrapper[4770]: E0126 18:43:19.766782 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.809475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.809538 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.809562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.809593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.809618 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.911761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.911812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.911828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.911851 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:19 crc kubenswrapper[4770]: I0126 18:43:19.911869 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:19Z","lastTransitionTime":"2026-01-26T18:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.015017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.015060 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.015071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.015086 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.015100 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.117558 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.117627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.117644 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.117669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.117685 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.220325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.220398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.220421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.220449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.220470 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.323219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.323255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.323267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.323284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.323295 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.426076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.426136 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.426159 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.426186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.426207 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.529818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.529904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.529926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.529955 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.529978 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.573507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.573575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.573596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.573624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.573644 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: E0126 18:43:20.593724 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.599396 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.599468 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.599484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.599501 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.599513 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: E0126 18:43:20.643104 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.647455 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.647507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.647519 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.647536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.647549 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: E0126 18:43:20.663085 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.668496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.668561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.668578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.668606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.668624 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: E0126 18:43:20.687405 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:20Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:20 crc kubenswrapper[4770]: E0126 18:43:20.687736 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.689838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.689900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.689919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.689945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.689961 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.754773 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:50:35.14620025 +0000 UTC Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.766044 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:20 crc kubenswrapper[4770]: E0126 18:43:20.766157 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.792044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.792063 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.792071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.792081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.792088 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.894305 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.894357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.894371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.894393 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.894408 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.997323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.997380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.997404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.997432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:20 crc kubenswrapper[4770]: I0126 18:43:20.997456 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:20Z","lastTransitionTime":"2026-01-26T18:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.099973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.100576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.100585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.100599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.100608 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.202551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.202578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.202587 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.202599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.202607 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.304747 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.304816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.304834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.304859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.304877 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.406691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.406792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.406805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.406822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.406834 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.509399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.509461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.509482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.509506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.509525 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.612131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.612216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.612233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.612256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.612273 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.714943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.714984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.714994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.715009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.715017 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.755024 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:42:50.918521314 +0000 UTC Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.766460 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.766506 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.766507 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:21 crc kubenswrapper[4770]: E0126 18:43:21.766598 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:21 crc kubenswrapper[4770]: E0126 18:43:21.766770 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:21 crc kubenswrapper[4770]: E0126 18:43:21.766948 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.818289 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.818357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.818380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.818411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.818436 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.921239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.921282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.921296 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.921316 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:21 crc kubenswrapper[4770]: I0126 18:43:21.921332 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:21Z","lastTransitionTime":"2026-01-26T18:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.024050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.024131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.024157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.024189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.024213 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.127468 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.127834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.127872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.127906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.127933 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.231879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.231934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.231949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.231971 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.231986 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.335012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.335130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.335151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.335175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.335193 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.438674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.438757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.438776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.438797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.438812 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.541844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.541911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.541928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.541947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.541959 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.644649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.644754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.644769 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.644789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.644803 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.747429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.747476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.747491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.747514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.747531 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.755466 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:47:02.719009673 +0000 UTC Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.766614 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:22 crc kubenswrapper[4770]: E0126 18:43:22.766801 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.850831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.850915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.850940 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.850970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.850992 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.954190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.954472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.954485 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.954503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:22 crc kubenswrapper[4770]: I0126 18:43:22.954515 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:22Z","lastTransitionTime":"2026-01-26T18:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.057096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.057137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.057146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.057159 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.057168 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.159103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.159167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.159186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.159210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.159228 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.262016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.262067 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.262078 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.262097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.262111 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.364870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.364939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.364957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.364979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.364996 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.467981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.468050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.468068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.468111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.468128 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.570627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.570658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.570668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.570684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.570715 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.673377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.673417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.673431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.673447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.673459 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.756215 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:50:53.041342325 +0000 UTC Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.766776 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.766830 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.766974 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:23 crc kubenswrapper[4770]: E0126 18:43:23.766971 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:23 crc kubenswrapper[4770]: E0126 18:43:23.767099 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:23 crc kubenswrapper[4770]: E0126 18:43:23.767184 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.775638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.775752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.775829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.775860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.775877 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.878059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.878124 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.878146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.878168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.878185 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.980658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.980755 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.980772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.980795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:23 crc kubenswrapper[4770]: I0126 18:43:23.980812 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:23Z","lastTransitionTime":"2026-01-26T18:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.083334 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.083413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.083430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.083454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.083475 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.186412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.186465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.186476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.186492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.186504 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.328144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.328191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.328206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.328228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.328244 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.431056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.431128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.431147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.431173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.431194 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.533860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.533934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.533957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.533987 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.534012 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.636838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.636882 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.636894 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.636911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.636923 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.740525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.740584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.740607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.740635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.740772 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.756901 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:50:49.85170589 +0000 UTC Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.766493 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:24 crc kubenswrapper[4770]: E0126 18:43:24.766875 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.844293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.844361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.844385 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.844414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.844441 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.947372 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.947427 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.947445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.947471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:24 crc kubenswrapper[4770]: I0126 18:43:24.947495 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:24Z","lastTransitionTime":"2026-01-26T18:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.051154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.051213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.051229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.051253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.051272 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.154690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.154781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.154799 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.154822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.154838 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.257869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.257922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.257938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.257962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.257977 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.360359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.360442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.360475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.360505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.360528 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.463420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.463474 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.463490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.463512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.463529 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.566579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.566653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.566676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.566748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.566775 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.670368 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.670441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.670466 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.670495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.670516 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.758065 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:50:29.649840587 +0000 UTC Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.766751 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.766761 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:25 crc kubenswrapper[4770]: E0126 18:43:25.766910 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:25 crc kubenswrapper[4770]: E0126 18:43:25.767089 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.767176 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:25 crc kubenswrapper[4770]: E0126 18:43:25.767338 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.773232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.773303 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.773322 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.773346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.773365 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.787199 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f
3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.810638 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.823779 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.839508 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.857317 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.876831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.876869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.876879 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.876895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.876906 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.878583 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 
maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.893205 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.907122 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.929536 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:16Z\\\",\\\"message\\\":\\\"3] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 18:43:16.518202 6796 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.942110 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.953595 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a6d8e81-0ee8-46d2-aa68-e1f2a6ecd9ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795795fcad582044039d1aa0be8059b315cea9e8596158c10a6fb2717fa04ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.974819 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.978793 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.978836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.978851 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.978869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.978885 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:25Z","lastTransitionTime":"2026-01-26T18:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:25 crc kubenswrapper[4770]: I0126 18:43:25.991006 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:25Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.004691 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.017533 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.030412 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.047656 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.065141 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.081531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 
18:43:26.081568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.081639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.081659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.081693 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.082666 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:26Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.184921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.185036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.185059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc 
kubenswrapper[4770]: I0126 18:43:26.185137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.185159 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.287879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.287938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.287955 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.287998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.288015 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.390062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.390138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.390160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.390188 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.390211 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.492807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.492860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.492876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.492897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.492913 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.596288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.596341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.596362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.596385 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.596404 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.699324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.699409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.699429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.699458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.699478 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.759122 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:52:14.367898507 +0000 UTC Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.766133 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:26 crc kubenswrapper[4770]: E0126 18:43:26.766313 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.802539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.802626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.802660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.802738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.802769 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.905873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.905926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.905945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.905969 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:26 crc kubenswrapper[4770]: I0126 18:43:26.905984 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:26Z","lastTransitionTime":"2026-01-26T18:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.008397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.008453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.008476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.008494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.008507 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.111153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.111215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.111236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.111267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.111290 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.213561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.213634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.213670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.213734 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.213760 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.316286 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.316326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.316338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.316353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.316363 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.419599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.419682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.419740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.419770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.419790 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.523093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.523176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.523196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.523221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.523240 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.625383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.625446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.625463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.625487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.625505 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.729931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.729969 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.729979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.729995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.730006 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.760105 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:39:04.624644164 +0000 UTC Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.766088 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.766121 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.766097 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:27 crc kubenswrapper[4770]: E0126 18:43:27.766265 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:27 crc kubenswrapper[4770]: E0126 18:43:27.766431 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:27 crc kubenswrapper[4770]: E0126 18:43:27.766584 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.833135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.833196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.833214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.833239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.833256 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.936019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.936095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.936114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.936138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:27 crc kubenswrapper[4770]: I0126 18:43:27.936157 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:27Z","lastTransitionTime":"2026-01-26T18:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.039658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.039739 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.039752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.039768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.039780 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.141606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.141652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.141665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.141680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.141692 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.244201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.244251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.244266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.244286 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.244301 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.347193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.347345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.347382 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.347415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.347440 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.451059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.451145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.451166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.451193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.451213 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.554167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.554252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.554276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.554305 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.554328 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.657357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.657456 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.657507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.657528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.657543 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.760474 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:16:03.095828771 +0000 UTC Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.761014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.761069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.761090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.761117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.761135 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.766518 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:28 crc kubenswrapper[4770]: E0126 18:43:28.766784 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.864464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.864527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.864544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.864571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.864589 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.968057 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.968103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.968148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.968171 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:28 crc kubenswrapper[4770]: I0126 18:43:28.968181 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:28Z","lastTransitionTime":"2026-01-26T18:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.071429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.071549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.071568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.071594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.071612 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.174183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.174237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.174255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.174277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.174294 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.276553 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.276612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.276639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.276668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.276691 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.381168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.381233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.381256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.381284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.381307 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.483989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.484027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.484062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.484079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.484090 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.586634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.586670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.586679 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.586692 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.586720 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.688904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.688967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.688989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.689017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.689039 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.761530 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:14:32.742248514 +0000 UTC Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.766987 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.767112 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:29 crc kubenswrapper[4770]: E0126 18:43:29.767181 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:29 crc kubenswrapper[4770]: E0126 18:43:29.767291 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.766987 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:29 crc kubenswrapper[4770]: E0126 18:43:29.767426 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.791508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.791566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.791583 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.791606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.791623 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.894238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.894304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.894323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.894347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.894365 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.997893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.997959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.997973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.997994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:29 crc kubenswrapper[4770]: I0126 18:43:29.998011 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:29Z","lastTransitionTime":"2026-01-26T18:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.100876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.100937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.100952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.100973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.100987 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.204172 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.204228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.204246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.204273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.204290 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.307750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.307822 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.307846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.307876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.307898 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.410832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.410900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.410923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.410951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.410975 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.514645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.514751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.514777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.514809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.514835 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.617042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.617091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.617107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.617128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.617145 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.720499 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.720539 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.720551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.720568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.720580 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.761684 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:44:29.882954334 +0000 UTC Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.766073 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:30 crc kubenswrapper[4770]: E0126 18:43:30.766231 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.822931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.822970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.822981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.822998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.823010 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.925383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.925447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.925466 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.925491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:30 crc kubenswrapper[4770]: I0126 18:43:30.925508 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:30Z","lastTransitionTime":"2026-01-26T18:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.003667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.003758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.003780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.003808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.003829 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.021067 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.025257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.025299 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.025310 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.025327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.025339 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.036832 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 18:43:31.021067 attempt above; duplicate elided] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.040009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.040066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.040085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.040114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.040129 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.050802 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.057656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.057752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.057770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.057836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.057861 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.069539 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.072691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.072751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.072760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.072774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.072786 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.084808 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e92cb904-8251-4c58-a8df-ec04634af33f\\\",\\\"systemUUID\\\":\\\"72c9bf02-a067-4dd0-b297-10816a0f4fa6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:31Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.084967 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.086781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.086821 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.086833 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.086849 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.086862 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.189531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.189588 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.189604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.189627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.189646 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.297007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.297082 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.297099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.297123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.297143 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.400616 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.400664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.400678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.400725 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.400739 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.503220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.503268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.503284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.503307 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.503324 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.605639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.605691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.605733 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.605757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.605775 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.708608 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.708677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.708740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.708761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.708777 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.762684 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:01:45.021192042 +0000 UTC Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.767332 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.767694 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.767764 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.767906 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.768070 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.768279 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.769621 4770 scope.go:117] "RemoveContainer" containerID="df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97" Jan 26 18:43:31 crc kubenswrapper[4770]: E0126 18:43:31.769977 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.811757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.811798 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.811808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.811824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.811834 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.914085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.914140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.914157 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.914177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:31 crc kubenswrapper[4770]: I0126 18:43:31.914192 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:31Z","lastTransitionTime":"2026-01-26T18:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.016760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.016828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.016845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.016869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.016885 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.119166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.119197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.119204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.119216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.119225 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.221908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.221977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.221994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.222017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.222035 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.324982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.325051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.325072 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.325104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.325127 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.427993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.428053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.428069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.428104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.428125 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.531220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.531330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.531353 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.531379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.531397 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.635327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.635397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.635420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.635450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.635475 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.738809 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.738906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.738925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.738948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.738964 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.762924 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:15:08.060326341 +0000 UTC Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.766259 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:32 crc kubenswrapper[4770]: E0126 18:43:32.766576 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.842589 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.842666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.842688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.842764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.842787 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.945404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.945447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.945457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.945475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:32 crc kubenswrapper[4770]: I0126 18:43:32.945484 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:32Z","lastTransitionTime":"2026-01-26T18:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.047753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.047833 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.047855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.047886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.047906 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.151019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.151068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.151091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.151122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.151139 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.253995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.254025 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.254033 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.254044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.254055 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.357427 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.357516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.357544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.357573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.357595 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.460815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.460863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.460874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.460891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.460902 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.564172 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.564237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.564262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.564286 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.564301 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.667760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.667824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.667847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.667879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.667901 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.763849 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 02:21:20.452931179 +0000 UTC Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.766267 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:33 crc kubenswrapper[4770]: E0126 18:43:33.766462 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.766507 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.766463 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:33 crc kubenswrapper[4770]: E0126 18:43:33.766853 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:33 crc kubenswrapper[4770]: E0126 18:43:33.766966 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.770320 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.770377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.770391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.770410 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.770422 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.873288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.873369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.873387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.873407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.873420 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.975986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.976086 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.976105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.976131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:33 crc kubenswrapper[4770]: I0126 18:43:33.976146 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:33Z","lastTransitionTime":"2026-01-26T18:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.079749 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.079789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.079802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.079820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.079832 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.182280 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.182314 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.182325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.182341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.182351 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.285251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.285293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.285304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.285319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.285330 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.387763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.387818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.387829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.387850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.387866 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.490934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.490970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.490981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.490997 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.491008 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.593222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.593261 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.593271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.593285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.593296 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.696858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.696932 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.696954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.696985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.697005 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.764316 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:16:05.76860963 +0000 UTC Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.766798 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:34 crc kubenswrapper[4770]: E0126 18:43:34.767003 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.799897 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.799944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.799959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.799980 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.799995 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.903003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.903098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.903111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.903128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:34 crc kubenswrapper[4770]: I0126 18:43:34.903168 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:34Z","lastTransitionTime":"2026-01-26T18:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.006173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.006218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.006229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.006247 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.006259 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.109025 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.109120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.109147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.109176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.109197 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.212389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.212489 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.212521 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.212558 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.212586 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.288576 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:35 crc kubenswrapper[4770]: E0126 18:43:35.288797 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:43:35 crc kubenswrapper[4770]: E0126 18:43:35.288903 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs podName:f836a816-01c1-448b-9736-c65a8f4f0044 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:39.288877365 +0000 UTC m=+163.853784117 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs") pod "network-metrics-daemon-bqfpk" (UID: "f836a816-01c1-448b-9736-c65a8f4f0044") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.315336 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.315428 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.315444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.315465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.315479 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.417916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.417975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.417990 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.418010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.418025 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.520818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.520900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.520926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.520963 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.520987 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.623869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.623927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.623944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.623967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.623990 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.727443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.727557 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.727621 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.727660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.727684 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.765361 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:59:23.168800384 +0000 UTC Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.766812 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.766867 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.766924 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:35 crc kubenswrapper[4770]: E0126 18:43:35.767070 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:35 crc kubenswrapper[4770]: E0126 18:43:35.767136 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:35 crc kubenswrapper[4770]: E0126 18:43:35.766993 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.790891 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-f87gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf1d4063-db34-411a-bdbc-3736acf7f126\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:43:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:03Z\\\",\\\"message\\\":\\\"2026-01-26T18:42:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e\\\\n2026-01-26T18:42:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_55043e69-b2b0-40d8-9536-43f4518efd9e to /host/opt/cni/bin/\\\\n2026-01-26T18:42:18Z [verbose] multus-daemon started\\\\n2026-01-26T18:42:18Z [verbose] Readiness Indicator file check\\\\n2026-01-26T18:43:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:43:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rgvlm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-f87gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.827068 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49551d69-752c-4bcd-b265-d98a3ec92838\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T18:43:16Z\\\",\\\"message\\\":\\\"3] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"ba175bbe-5cc4-47e6-a32d-57693e1320bd\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.36\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0126 18:43:16.518202 6796 model_clien\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:43:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccb111919bd98c812b
a9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg8r7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lgvzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.831015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.831076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.831095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.831119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.831137 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.845029 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f836a816-01c1-448b-9736-c65a8f4f0044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ljwq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bqfpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc 
kubenswrapper[4770]: I0126 18:43:35.863294 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a6d8e81-0ee8-46d2-aa68-e1f2a6ecd9ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795795fcad582044039d1aa0be8059b315cea9e8596158c10a6fb2717fa04ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed8dfdb434d636948311a05eb2368e97d90a1d80759c0395e24c55ca03a6d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.884928 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e98fb9-95de-46bc-ac1f-f880afa0b2b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0876367ad653e7d9387072377ca107927310f0b2309a11c7c72d4c62ede8fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf332edd6608ab899233cff8ab8ff2edf94687707584b4e0cc1eba8739f7c452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://972cee01f130a7002bdd9b4de073afe37de202076c7c5799140490ca0465589c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://328a0833a6660f5865079e71f54664d98df70380a22ea501a9100d153624fae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ff13491d4b07ae5d2a868f8307337d162db6134867e21207087634091e355e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2a30e828eb8aa8b798d72e4c60c3ad9a8d20f0382715da9b203ebdf32d321e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebf0f20dc897904e28da34b9975c7bedecc296fb6a47e9688de8b49213aef35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ab8e52307bb965945c01094c8a420b6270242d0e9ad6a5f5c6abb22db89938a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.900732 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.912853 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6109a686-3ab2-465e-8a96-354f2ecbf491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bcfecfe1b95289f7367b78a0564fcc044ad242bbe4b132cbb9ff4e7a803aa2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b0
77245290cdd2acd05bff3573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cpmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nnf7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.928075 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d06408-0503-4a23-a417-dff17ebd0e1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ecfc46f0e9f46d05520c23221c6a6489ded70cac9910327e67221063050b7e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e611e06f258c6fc371b7286b03f53e1e8482f
1a839c9ce336bda03a395252e83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8snm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5hkhm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.936168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.936198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.936208 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.936224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.936236 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:35Z","lastTransitionTime":"2026-01-26T18:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.942169 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.957307 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://160c1256335ea831d513976b88d8c2135905f2882aec0ae102da92ff2ef7f548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b9fd3e7a5b05ab45d70f1b256d8455e8e294ebdfc53d7ba32ea80adef1bdb38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.972363 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.983644 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b6qql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b05a08e3-3ed4-479f-8b88-acf1d7868c9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6996a3dcb408a0119984bb516dc32a1cbd3138d813b7c560bfe2c85307e60d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jpw7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b6qql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:35 crc kubenswrapper[4770]: I0126 18:43:35.997008 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc090547-6c02-4c3f-9bef-bb8e2d266b88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9319e66be41872fd5577247d19b57e95b676c9b0822dceb406cef379e910f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08bb1c542fe72c69de001b0764daeb7402f7299a5d2ed98d6cc8c60654520092\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f7fd2e9bb1692408fdd62e4cc774dda79bd85b53b1b1c0ff253b87280da667\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:35Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.012890 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3521b6a0-1dc0-4a10-a8f5-fca1b2cde17e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://653e2ef84d6c22123bdd6f3b5891ddcf89cf33ab59d7297db1210fa343b878bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dab8702bc3da8d7d3fb04cd0bb8f0993cee145b5b593343d464d76d6c7791375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e8b2b6fafdde87c9406b3efffd98e7693c716e613f2559b93b488ec3c08087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7bbfc6398e555eb9279222b9438237af8c4641a133b20b13753be04644ebf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b756
e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b756e19f28a0e0aaa26d5bf7dd572bfcddfb524e7aa562de4b8912761fd1b3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6094791f07299627300a27f7caa4bfa6952057dfb74cebb1d8e623833f5426e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d68246e727abcae666a4d8baf14ab9b2b42e83d4eb85035f0904441bdade43af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:42:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:42:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lng8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nf9ww\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.025461 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kk5wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21c84bb4-c720-4d18-bb93-908501f2f39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98769c5ec17409029efee24c6ddf717eac2a94841cf9551bdc10da5e3ed72bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T18:42:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97klc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:42:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kk5wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.038948 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369fbdfea9e21065f96859f73b6d916d0355b1e340f48c19d786d85ac9efca06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T18:43:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.039391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.039431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.039443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.039484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.039497 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.054631 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa55f16a-471b-44ef-8dc9-8217a63c0d69\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08bc0a9e9c2cae7330a0eb99d49024df47efef893c5de71a0de760226af46864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4839b78336b9e514f1260c286d51
d6b72043666c5578f6b2a88d5796168192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9667cfbb52f6165dff16f485e89a0a85839a72528e35e3b926db5672ac48d49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8166937e9f370ef670489194e3284cf4bae866fd7bcc45390d3a038de5692d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.068501 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecd3a1f0-f0f8-44a5-9af2-11165831609e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T18:41:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T18:42:15Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 18:42:09.829619 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 18:42:09.835636 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-257111878/tls.crt::/tmp/serving-cert-257111878/tls.key\\\\\\\"\\\\nI0126 18:42:15.354416 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 18:42:15.360951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 18:42:15.360975 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 18:42:15.360995 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 18:42:15.361011 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 18:42:15.366409 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 18:42:15.366437 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366442 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 18:42:15.366447 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 18:42:15.366451 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 18:42:15.366454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 18:42:15.366459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 18:42:15.366621 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0126 18:42:15.367891 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:41:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7b38a213677a996f07fccf6f8bf8c462
c84ef794c7ccd883d6e983bf11ecca5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T18:41:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T18:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T18:41:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.093462 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T18:42:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0846752cbd1e82943ff30b81ec03d6e3b6699ea7661535598d17d65547e09265\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T18:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T18:43:36Z is after 2025-08-24T17:21:41Z" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.142735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.142794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.142812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.142837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.142854 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.246191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.246268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.246288 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.246316 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.246335 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.349110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.349185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.349204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.349227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.349245 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.451827 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.451876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.451891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.451912 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.451929 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.555568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.555634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.555650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.555675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.555741 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.659250 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.659327 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.659343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.659361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.659379 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.763094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.763180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.763215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.763243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.763266 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.765507 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:00:29.187191107 +0000 UTC Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.766811 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:36 crc kubenswrapper[4770]: E0126 18:43:36.767009 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.866161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.866234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.866247 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.866266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.866302 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.969687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.969784 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.969807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.969834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:36 crc kubenswrapper[4770]: I0126 18:43:36.969853 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:36Z","lastTransitionTime":"2026-01-26T18:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.073593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.073684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.073756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.073792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.073818 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.177261 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.177317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.177335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.177373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.177405 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.279650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.279743 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.279762 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.279791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.279813 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.383066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.383110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.383123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.383144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.383158 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.486031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.486075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.486086 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.486102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.486116 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.589203 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.589283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.589306 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.589331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.589355 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.692766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.692852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.692872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.692898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.692918 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.766661 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:07:17.93890588 +0000 UTC Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.767041 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:37 crc kubenswrapper[4770]: E0126 18:43:37.767197 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.767320 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.767437 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:37 crc kubenswrapper[4770]: E0126 18:43:37.767600 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:37 crc kubenswrapper[4770]: E0126 18:43:37.768202 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.796592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.796730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.796752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.796785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.796805 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.900336 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.900389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.900407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.900432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:37 crc kubenswrapper[4770]: I0126 18:43:37.900449 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:37Z","lastTransitionTime":"2026-01-26T18:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.002494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.002547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.002564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.002585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.002601 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.105872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.105984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.106003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.106058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.106078 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.208298 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.208399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.208449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.208476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.208494 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.313198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.313265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.313276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.313293 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.313321 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.416817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.416898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.416914 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.416931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.416944 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.520855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.520922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.520933 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.520968 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.520981 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.624814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.624891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.624915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.624947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.624965 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.727404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.727452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.727470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.727493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.727510 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.766303 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:38 crc kubenswrapper[4770]: E0126 18:43:38.766755 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.766834 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:21:31.91028908 +0000 UTC Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.830960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.831022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.831039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.831066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.831088 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.933947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.933998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.934012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.934031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:38 crc kubenswrapper[4770]: I0126 18:43:38.934046 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:38Z","lastTransitionTime":"2026-01-26T18:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.037506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.037566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.037585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.037613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.037632 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.141479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.141558 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.141579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.141607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.141626 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.245927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.245976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.245989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.246006 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.246018 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.348263 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.348329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.348348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.348377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.348399 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.451323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.451409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.451434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.451467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.451490 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.555081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.555188 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.555239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.555263 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.555283 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.658909 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.658975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.658993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.659020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.659042 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.762115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.762179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.762207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.762235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.762255 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.766787 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.766822 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.766885 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:38:37.307653041 +0000 UTC Jan 26 18:43:39 crc kubenswrapper[4770]: E0126 18:43:39.766973 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.767017 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:39 crc kubenswrapper[4770]: E0126 18:43:39.767363 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:39 crc kubenswrapper[4770]: E0126 18:43:39.767484 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.865137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.865186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.865200 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.865227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.865239 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.968906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.968959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.968976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.968997 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:39 crc kubenswrapper[4770]: I0126 18:43:39.969011 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:39Z","lastTransitionTime":"2026-01-26T18:43:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.072338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.072386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.072398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.072416 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.072428 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.175612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.175681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.175739 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.175775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.175798 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.277794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.277846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.277859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.277884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.277910 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.380862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.380914 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.380951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.380970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.380983 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.483652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.483729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.483742 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.483760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.483772 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.586380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.586436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.586451 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.586471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.586485 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.688372 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.688430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.688438 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.688469 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.688480 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.766981 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:40 crc kubenswrapper[4770]: E0126 18:43:40.767196 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.767240 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:00:33.089750509 +0000 UTC Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.791214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.791330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.791349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.791370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.791385 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.894924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.894983 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.894994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.895014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.895027 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.998647 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.998777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.998795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.998826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:40 crc kubenswrapper[4770]: I0126 18:43:40.998844 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:40Z","lastTransitionTime":"2026-01-26T18:43:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.101956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.102019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.102037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.102097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.102116 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:41Z","lastTransitionTime":"2026-01-26T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.142644 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.142797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.142825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.142855 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.142879 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T18:43:41Z","lastTransitionTime":"2026-01-26T18:43:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.211827 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52"] Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.212337 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.214220 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.214484 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.215115 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.215944 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.260951 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef5ffe42-9809-4e30-9058-28f9d34c2904-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.261001 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef5ffe42-9809-4e30-9058-28f9d34c2904-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.261053 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ef5ffe42-9809-4e30-9058-28f9d34c2904-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.261107 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef5ffe42-9809-4e30-9058-28f9d34c2904-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.261144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5ffe42-9809-4e30-9058-28f9d34c2904-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.281032 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-f87gd" podStartSLOduration=85.281015593 podStartE2EDuration="1m25.281015593s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.239391621 +0000 UTC m=+105.804298353" watchObservedRunningTime="2026-01-26 18:43:41.281015593 +0000 UTC m=+105.845922325" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.321835 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.321816483 podStartE2EDuration="31.321816483s" 
podCreationTimestamp="2026-01-26 18:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.321519835 +0000 UTC m=+105.886426577" watchObservedRunningTime="2026-01-26 18:43:41.321816483 +0000 UTC m=+105.886723215" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.355349 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.355330899 podStartE2EDuration="1m26.355330899s" podCreationTimestamp="2026-01-26 18:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.354671981 +0000 UTC m=+105.919578723" watchObservedRunningTime="2026-01-26 18:43:41.355330899 +0000 UTC m=+105.920237641" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.361793 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef5ffe42-9809-4e30-9058-28f9d34c2904-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.361841 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5ffe42-9809-4e30-9058-28f9d34c2904-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.361920 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef5ffe42-9809-4e30-9058-28f9d34c2904-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.361926 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ef5ffe42-9809-4e30-9058-28f9d34c2904-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.361948 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef5ffe42-9809-4e30-9058-28f9d34c2904-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.361971 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef5ffe42-9809-4e30-9058-28f9d34c2904-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.362039 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ef5ffe42-9809-4e30-9058-28f9d34c2904-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.363034 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef5ffe42-9809-4e30-9058-28f9d34c2904-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.368604 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef5ffe42-9809-4e30-9058-28f9d34c2904-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.380380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef5ffe42-9809-4e30-9058-28f9d34c2904-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2bh52\" (UID: \"ef5ffe42-9809-4e30-9058-28f9d34c2904\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.394003 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podStartSLOduration=85.393979122 podStartE2EDuration="1m25.393979122s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.390805776 +0000 UTC m=+105.955712518" watchObservedRunningTime="2026-01-26 18:43:41.393979122 +0000 UTC m=+105.958885854" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.411517 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5hkhm" podStartSLOduration=84.411495089 
podStartE2EDuration="1m24.411495089s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.410617615 +0000 UTC m=+105.975524347" watchObservedRunningTime="2026-01-26 18:43:41.411495089 +0000 UTC m=+105.976401861" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.488189 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.488171458 podStartE2EDuration="1m25.488171458s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.487077538 +0000 UTC m=+106.051984280" watchObservedRunningTime="2026-01-26 18:43:41.488171458 +0000 UTC m=+106.053078190" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.488286 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-b6qql" podStartSLOduration=85.488283381 podStartE2EDuration="1m25.488283381s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.473763453 +0000 UTC m=+106.038670205" watchObservedRunningTime="2026-01-26 18:43:41.488283381 +0000 UTC m=+106.053190113" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.514350 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kk5wm" podStartSLOduration=85.514330537 podStartE2EDuration="1m25.514330537s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.514052389 +0000 UTC 
m=+106.078959131" watchObservedRunningTime="2026-01-26 18:43:41.514330537 +0000 UTC m=+106.079237269" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.515117 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nf9ww" podStartSLOduration=85.515109857 podStartE2EDuration="1m25.515109857s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.504556416 +0000 UTC m=+106.069463158" watchObservedRunningTime="2026-01-26 18:43:41.515109857 +0000 UTC m=+106.080016589" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.540256 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.545672 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=53.545658284 podStartE2EDuration="53.545658284s" podCreationTimestamp="2026-01-26 18:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.544975805 +0000 UTC m=+106.109882557" watchObservedRunningTime="2026-01-26 18:43:41.545658284 +0000 UTC m=+106.110565016" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.588280 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.588262401 podStartE2EDuration="1m26.588262401s" podCreationTimestamp="2026-01-26 18:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:41.572267144 +0000 UTC m=+106.137173876" 
watchObservedRunningTime="2026-01-26 18:43:41.588262401 +0000 UTC m=+106.153169123" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.766415 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.766463 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.766503 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:41 crc kubenswrapper[4770]: E0126 18:43:41.766864 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:41 crc kubenswrapper[4770]: E0126 18:43:41.767076 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:41 crc kubenswrapper[4770]: E0126 18:43:41.767164 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.768376 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:46:10.1388104 +0000 UTC Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.768455 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 18:43:41 crc kubenswrapper[4770]: I0126 18:43:41.777934 4770 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 18:43:42 crc kubenswrapper[4770]: I0126 18:43:42.345636 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" event={"ID":"ef5ffe42-9809-4e30-9058-28f9d34c2904","Type":"ContainerStarted","Data":"7e2d8889912323ff6355c46982f1125412bf68ef6e4acb2956dbc3d9952dc6fc"} Jan 26 18:43:42 crc kubenswrapper[4770]: I0126 18:43:42.345689 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" event={"ID":"ef5ffe42-9809-4e30-9058-28f9d34c2904","Type":"ContainerStarted","Data":"ff53622d469ba7b348af4243128e0f2732241941ab4fcedcebe0d851841b2540"} Jan 26 18:43:42 crc kubenswrapper[4770]: I0126 18:43:42.766572 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:42 crc kubenswrapper[4770]: E0126 18:43:42.766798 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:43 crc kubenswrapper[4770]: I0126 18:43:43.766479 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:43 crc kubenswrapper[4770]: I0126 18:43:43.766531 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:43 crc kubenswrapper[4770]: E0126 18:43:43.766598 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:43 crc kubenswrapper[4770]: I0126 18:43:43.766619 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:43 crc kubenswrapper[4770]: E0126 18:43:43.766678 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:43 crc kubenswrapper[4770]: E0126 18:43:43.766795 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:44 crc kubenswrapper[4770]: I0126 18:43:44.767144 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:44 crc kubenswrapper[4770]: E0126 18:43:44.767952 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:45 crc kubenswrapper[4770]: I0126 18:43:45.766882 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:45 crc kubenswrapper[4770]: I0126 18:43:45.766910 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:45 crc kubenswrapper[4770]: I0126 18:43:45.768226 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:45 crc kubenswrapper[4770]: E0126 18:43:45.768200 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:45 crc kubenswrapper[4770]: E0126 18:43:45.768300 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:45 crc kubenswrapper[4770]: E0126 18:43:45.768845 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:45 crc kubenswrapper[4770]: I0126 18:43:45.769174 4770 scope.go:117] "RemoveContainer" containerID="df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97" Jan 26 18:43:45 crc kubenswrapper[4770]: E0126 18:43:45.769367 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lgvzv_openshift-ovn-kubernetes(49551d69-752c-4bcd-b265-d98a3ec92838)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" Jan 26 18:43:46 crc kubenswrapper[4770]: I0126 18:43:46.767170 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:46 crc kubenswrapper[4770]: E0126 18:43:46.767406 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:47 crc kubenswrapper[4770]: I0126 18:43:47.766938 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:47 crc kubenswrapper[4770]: E0126 18:43:47.767053 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:47 crc kubenswrapper[4770]: I0126 18:43:47.767155 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:47 crc kubenswrapper[4770]: I0126 18:43:47.767177 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:47 crc kubenswrapper[4770]: E0126 18:43:47.767314 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:47 crc kubenswrapper[4770]: E0126 18:43:47.767446 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:48 crc kubenswrapper[4770]: I0126 18:43:48.766597 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:48 crc kubenswrapper[4770]: E0126 18:43:48.766801 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.376117 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/1.log" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.376739 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/0.log" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.376802 4770 generic.go:334] "Generic (PLEG): container finished" podID="cf1d4063-db34-411a-bdbc-3736acf7f126" containerID="7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0" exitCode=1 Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.376851 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerDied","Data":"7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0"} Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.376926 4770 scope.go:117] "RemoveContainer" containerID="4caa20ac4fea0f9e7742a506b51a4dd2377aa2293d2dfe5eb948edd5aa8af499" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.377427 4770 scope.go:117] "RemoveContainer" containerID="7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0" Jan 26 18:43:49 crc kubenswrapper[4770]: E0126 18:43:49.377609 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-f87gd_openshift-multus(cf1d4063-db34-411a-bdbc-3736acf7f126)\"" pod="openshift-multus/multus-f87gd" podUID="cf1d4063-db34-411a-bdbc-3736acf7f126" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.400414 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2bh52" podStartSLOduration=93.400382483 podStartE2EDuration="1m33.400382483s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:43:42.367381497 +0000 UTC m=+106.932288229" watchObservedRunningTime="2026-01-26 18:43:49.400382483 +0000 UTC m=+113.965289235" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.766292 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.766451 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:49 crc kubenswrapper[4770]: E0126 18:43:49.766565 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:49 crc kubenswrapper[4770]: E0126 18:43:49.766922 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:49 crc kubenswrapper[4770]: I0126 18:43:49.767327 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:49 crc kubenswrapper[4770]: E0126 18:43:49.767485 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:50 crc kubenswrapper[4770]: I0126 18:43:50.381844 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/1.log" Jan 26 18:43:50 crc kubenswrapper[4770]: I0126 18:43:50.766660 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:50 crc kubenswrapper[4770]: E0126 18:43:50.766925 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:51 crc kubenswrapper[4770]: I0126 18:43:51.766903 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:51 crc kubenswrapper[4770]: I0126 18:43:51.767078 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:51 crc kubenswrapper[4770]: I0126 18:43:51.767246 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:51 crc kubenswrapper[4770]: E0126 18:43:51.767228 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:51 crc kubenswrapper[4770]: E0126 18:43:51.767477 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:51 crc kubenswrapper[4770]: E0126 18:43:51.767598 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:52 crc kubenswrapper[4770]: I0126 18:43:52.766908 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:52 crc kubenswrapper[4770]: E0126 18:43:52.767103 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:53 crc kubenswrapper[4770]: I0126 18:43:53.767046 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:53 crc kubenswrapper[4770]: I0126 18:43:53.767105 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:53 crc kubenswrapper[4770]: I0126 18:43:53.767178 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:53 crc kubenswrapper[4770]: E0126 18:43:53.767252 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:53 crc kubenswrapper[4770]: E0126 18:43:53.767410 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:53 crc kubenswrapper[4770]: E0126 18:43:53.767591 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:54 crc kubenswrapper[4770]: I0126 18:43:54.766392 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:54 crc kubenswrapper[4770]: E0126 18:43:54.766594 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:55 crc kubenswrapper[4770]: E0126 18:43:55.739355 4770 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 18:43:55 crc kubenswrapper[4770]: I0126 18:43:55.766079 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:55 crc kubenswrapper[4770]: I0126 18:43:55.766102 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:55 crc kubenswrapper[4770]: E0126 18:43:55.768771 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:55 crc kubenswrapper[4770]: I0126 18:43:55.768899 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:55 crc kubenswrapper[4770]: E0126 18:43:55.769048 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:55 crc kubenswrapper[4770]: E0126 18:43:55.769110 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:55 crc kubenswrapper[4770]: E0126 18:43:55.859364 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 18:43:56 crc kubenswrapper[4770]: I0126 18:43:56.766421 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:56 crc kubenswrapper[4770]: E0126 18:43:56.766795 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:57 crc kubenswrapper[4770]: I0126 18:43:57.766204 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:57 crc kubenswrapper[4770]: I0126 18:43:57.766269 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:57 crc kubenswrapper[4770]: I0126 18:43:57.766348 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:57 crc kubenswrapper[4770]: E0126 18:43:57.766470 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:43:57 crc kubenswrapper[4770]: E0126 18:43:57.766593 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:57 crc kubenswrapper[4770]: E0126 18:43:57.766721 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:58 crc kubenswrapper[4770]: I0126 18:43:58.766953 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:43:58 crc kubenswrapper[4770]: E0126 18:43:58.767148 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:43:59 crc kubenswrapper[4770]: I0126 18:43:59.767057 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:43:59 crc kubenswrapper[4770]: I0126 18:43:59.767171 4770 scope.go:117] "RemoveContainer" containerID="7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0" Jan 26 18:43:59 crc kubenswrapper[4770]: E0126 18:43:59.767285 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:43:59 crc kubenswrapper[4770]: I0126 18:43:59.767313 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:43:59 crc kubenswrapper[4770]: E0126 18:43:59.767539 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:43:59 crc kubenswrapper[4770]: I0126 18:43:59.767624 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:43:59 crc kubenswrapper[4770]: E0126 18:43:59.767799 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:44:00 crc kubenswrapper[4770]: I0126 18:44:00.424110 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/1.log" Jan 26 18:44:00 crc kubenswrapper[4770]: I0126 18:44:00.424190 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerStarted","Data":"1c9be738ad7c937d32afeacfb09c00e68ba897b2b18ad8e2781db0f5eabbf845"} Jan 26 18:44:00 crc kubenswrapper[4770]: I0126 18:44:00.766112 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:00 crc kubenswrapper[4770]: E0126 18:44:00.766300 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:44:00 crc kubenswrapper[4770]: I0126 18:44:00.767343 4770 scope.go:117] "RemoveContainer" containerID="df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97" Jan 26 18:44:00 crc kubenswrapper[4770]: E0126 18:44:00.860744 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.430104 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/3.log" Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.433458 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerStarted","Data":"3dbc66c1327f6362b589dffd636803e9bc715970fe8b65bf078d6ef91b2d88dd"} Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.433945 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.461842 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podStartSLOduration=105.461823831 podStartE2EDuration="1m45.461823831s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:01.461321128 +0000 UTC m=+126.026227900" watchObservedRunningTime="2026-01-26 18:44:01.461823831 +0000 UTC m=+126.026730563" Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.663229 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bqfpk"] Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.663393 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:01 crc kubenswrapper[4770]: E0126 18:44:01.663537 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.767257 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.767448 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:01 crc kubenswrapper[4770]: E0126 18:44:01.767625 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:44:01 crc kubenswrapper[4770]: I0126 18:44:01.767668 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:44:01 crc kubenswrapper[4770]: E0126 18:44:01.767847 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:44:01 crc kubenswrapper[4770]: E0126 18:44:01.767966 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:44:02 crc kubenswrapper[4770]: I0126 18:44:02.766248 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:02 crc kubenswrapper[4770]: E0126 18:44:02.766520 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:44:03 crc kubenswrapper[4770]: I0126 18:44:03.767140 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:03 crc kubenswrapper[4770]: I0126 18:44:03.767206 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:03 crc kubenswrapper[4770]: E0126 18:44:03.767399 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:44:03 crc kubenswrapper[4770]: I0126 18:44:03.767420 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:44:03 crc kubenswrapper[4770]: E0126 18:44:03.767550 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:44:03 crc kubenswrapper[4770]: E0126 18:44:03.767765 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:44:04 crc kubenswrapper[4770]: I0126 18:44:04.766674 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:04 crc kubenswrapper[4770]: E0126 18:44:04.767361 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bqfpk" podUID="f836a816-01c1-448b-9736-c65a8f4f0044" Jan 26 18:44:05 crc kubenswrapper[4770]: I0126 18:44:05.766873 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:05 crc kubenswrapper[4770]: I0126 18:44:05.766887 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:05 crc kubenswrapper[4770]: E0126 18:44:05.768904 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 18:44:05 crc kubenswrapper[4770]: I0126 18:44:05.768951 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:44:05 crc kubenswrapper[4770]: E0126 18:44:05.769140 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 18:44:05 crc kubenswrapper[4770]: E0126 18:44:05.769341 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 18:44:06 crc kubenswrapper[4770]: I0126 18:44:06.767068 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:06 crc kubenswrapper[4770]: I0126 18:44:06.770558 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 18:44:06 crc kubenswrapper[4770]: I0126 18:44:06.770562 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 18:44:07 crc kubenswrapper[4770]: I0126 18:44:07.766847 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:07 crc kubenswrapper[4770]: I0126 18:44:07.766909 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:44:07 crc kubenswrapper[4770]: I0126 18:44:07.766868 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:07 crc kubenswrapper[4770]: I0126 18:44:07.769535 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 18:44:07 crc kubenswrapper[4770]: I0126 18:44:07.769856 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 18:44:07 crc kubenswrapper[4770]: I0126 18:44:07.770757 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 18:44:07 crc kubenswrapper[4770]: I0126 18:44:07.773501 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.923978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.981277 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj"] Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.981955 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.982664 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"] Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.983103 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.986641 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.987687 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zm2q9"] Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.988383 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.988786 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hpfp2"] Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.989260 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.989322 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.990044 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.991988 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"] Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.992104 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.992957 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.995162 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 18:44:11 crc kubenswrapper[4770]: W0126 18:44:11.995342 4770 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object Jan 26 18:44:11 crc kubenswrapper[4770]: E0126 18:44:11.995377 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.996639 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.997095 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"] Jan 26 18:44:11 crc kubenswrapper[4770]: I0126 18:44:11.997638 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.009373 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.009774 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.012922 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.013446 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.013538 4770 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: configmaps "console-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.013565 4770 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: secrets "console-operator-dockercfg-4xjcr" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.013577 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.013591 4770 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.013606 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-operator-dockercfg-4xjcr\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.013630 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.014181 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.015264 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.015663 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.019931 4770 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.019946 4770 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.019963 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.019971 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.019994 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.020072 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.019946 4770 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.020446 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.022859 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.023123 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.023453 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.024348 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h8sjr"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.024439 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.024807 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.050032 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2b2nm"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.050512 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.068559 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.068832 4770 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.068868 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.068921 4770 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.068937 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.068983 4770 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.068997 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: W0126 18:44:12.069044 4770 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object
Jan 26 18:44:12 crc kubenswrapper[4770]: E0126 18:44:12.069056 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.069211 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.069648 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.069943 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.070041 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.074784 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.075231 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.076116 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.076217 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.076328 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.077983 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.078231 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.078680 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.078781 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079035 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079135 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079211 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079280 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079419 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079521 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079602 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079669 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.079765 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.086800 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lndnr"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.087948 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.088327 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.088442 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.088587 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.089287 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.089407 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.089904 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.110335 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.110341 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.111129 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.111474 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jnn7h"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.111852 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jnn7h"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.112103 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.112797 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lndnr"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.112964 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.113607 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.113998 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.114379 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.116870 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117163 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-serving-cert\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117209 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-serving-cert\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117241 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgq8p\" (UniqueName: \"kubernetes.io/projected/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-kube-api-access-sgq8p\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-trusted-ca\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117288 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c6078e-9f01-4aab-adff-db90e6ddedfe-auth-proxy-config\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117308 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-audit-policies\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-client-ca\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117355 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-encryption-config\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117376 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-config\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117402 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-client-ca\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117426 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56wpl\" (UniqueName: \"kubernetes.io/projected/6e0fe412-7289-4f74-8039-b436ebac13e6-kube-api-access-56wpl\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117471 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4cd4eed4-e59b-4987-936a-b880b81311a1-images\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117493 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9c6078e-9f01-4aab-adff-db90e6ddedfe-config\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117514 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c195e6-53d6-46c5-bc06-f084727fec7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117537 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d9c6078e-9f01-4aab-adff-db90e6ddedfe-machine-approver-tls\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117560 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117583 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd4eed4-e59b-4987-936a-b880b81311a1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117605 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd4eed4-e59b-4987-936a-b880b81311a1-config\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117629 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c195e6-53d6-46c5-bc06-f084727fec7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117655 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkw7k\" (UniqueName: \"kubernetes.io/projected/95c195e6-53d6-46c5-bc06-f084727fec7b-kube-api-access-kkw7k\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117681 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-etcd-client\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117726 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0fe412-7289-4f74-8039-b436ebac13e6-audit-dir\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117755 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117777 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59b659e-3cc4-4463-9499-dfd40eec1d47-serving-cert\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117802 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-service-ca-bundle\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117823 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smz2w\" (UniqueName: \"kubernetes.io/projected/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-kube-api-access-smz2w\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-serving-cert\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm26r\" (UniqueName: \"kubernetes.io/projected/d9c6078e-9f01-4aab-adff-db90e6ddedfe-kube-api-access-tm26r\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117925 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-config\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117947 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c699m\" (UniqueName: \"kubernetes.io/projected/4cd4eed4-e59b-4987-936a-b880b81311a1-kube-api-access-c699m\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.117967 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.118000 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-config\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.118022 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw947\" (UniqueName: \"kubernetes.io/projected/a59b659e-3cc4-4463-9499-dfd40eec1d47-kube-api-access-jw947\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.118046 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-config\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.118069 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d943bce1-c743-4eea-99b2-e38c69a22211-serving-cert\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.118093 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzpf\" (UniqueName: \"kubernetes.io/projected/d943bce1-c743-4eea-99b2-e38c69a22211-kube-api-access-bnzpf\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.123268 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.123400 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-64f7c"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.123874 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.126128 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.126219 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjdzl"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.126786 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.127560 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78h7b"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.127658 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.129601 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.132580 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.133254 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.139304 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.139863 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.140295 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5qzkc"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.140673 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5qzkc"
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.141835 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pp4k8"]
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.142158 4770 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.148763 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.149127 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.149379 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.149453 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.149663 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.151326 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.151741 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.164695 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165154 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165295 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165448 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165569 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165689 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165732 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165824 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165849 4770 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165900 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.165997 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.166031 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.166039 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.166114 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.166876 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.168593 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.170151 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.176857 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.176932 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.176857 4770 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.176865 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177064 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177183 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177191 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177252 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177252 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177292 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177337 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177367 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177404 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177450 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177480 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.177539 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.178235 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.178327 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.178590 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.178779 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.179421 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.179615 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.179942 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.181245 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.181817 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.182777 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.183159 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.183559 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.184182 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.185833 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.189783 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.191132 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.193154 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.194709 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.202412 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hpfp2"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.223148 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24pqv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.223915 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-82pv2"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.224033 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.217298 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.224431 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.224737 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.220774 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-config\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.224868 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.224944 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h8sjr"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225016 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225089 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225140 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.219130 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-config\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225466 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mv7t\" (UniqueName: \"kubernetes.io/projected/6fff6531-8ffa-478f-977b-a9daf12938fe-kube-api-access-2mv7t\") pod \"downloads-7954f5f757-jnn7h\" (UID: \"6fff6531-8ffa-478f-977b-a9daf12938fe\") " pod="openshift-console/downloads-7954f5f757-jnn7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225504 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c699m\" (UniqueName: \"kubernetes.io/projected/4cd4eed4-e59b-4987-936a-b880b81311a1-kube-api-access-c699m\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225529 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225545 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2lkn\" (UniqueName: 
\"kubernetes.io/projected/65b0fb1c-f1ee-475d-9c5c-55f66744622f-kube-api-access-h2lkn\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225564 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-config\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225580 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw947\" (UniqueName: \"kubernetes.io/projected/a59b659e-3cc4-4463-9499-dfd40eec1d47-kube-api-access-jw947\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225596 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-service-ca\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225622 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-config\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 
18:44:12.225638 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225659 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlx7d\" (UniqueName: \"kubernetes.io/projected/c2e69bd3-7fa0-4687-9588-33fd56627615-kube-api-access-vlx7d\") pod \"multus-admission-controller-857f4d67dd-hjdzl\" (UID: \"c2e69bd3-7fa0-4687-9588-33fd56627615\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225680 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69397d9a-26a6-4ce7-806b-59fca2691a73-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225715 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d943bce1-c743-4eea-99b2-e38c69a22211-serving-cert\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225731 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-etcd-client\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225750 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225765 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/650860ca-e588-4148-b22f-1f4e7ba16b2d-node-pullsecrets\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225780 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69397d9a-26a6-4ce7-806b-59fca2691a73-config\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225799 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzpf\" (UniqueName: \"kubernetes.io/projected/d943bce1-c743-4eea-99b2-e38c69a22211-kube-api-access-bnzpf\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225815 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gzk\" (UniqueName: \"kubernetes.io/projected/cc59d647-0338-4bd2-a850-3e2ede6fa766-kube-api-access-42gzk\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225837 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc59d647-0338-4bd2-a850-3e2ede6fa766-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225856 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-audit\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225872 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-ca\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225891 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-serving-cert\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225908 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225922 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc59d647-0338-4bd2-a850-3e2ede6fa766-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225941 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35e8bd20-c06e-486c-b8c7-0e60df48448b-serving-cert\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.225982 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/650860ca-e588-4148-b22f-1f4e7ba16b2d-audit-dir\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226000 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-serving-cert\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226016 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgq8p\" (UniqueName: \"kubernetes.io/projected/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-kube-api-access-sgq8p\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226031 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-trusted-ca\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226052 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c6078e-9f01-4aab-adff-db90e6ddedfe-auth-proxy-config\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226069 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-audit-policies\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-client-ca\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226097 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-client\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226112 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226132 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-encryption-config\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226155 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69397d9a-26a6-4ce7-806b-59fca2691a73-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226171 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-config\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226187 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226201 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7b733e-ad98-408b-a125-1e4f0953dafa-serving-cert\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226216 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2e69bd3-7fa0-4687-9588-33fd56627615-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-hjdzl\" (UID: \"c2e69bd3-7fa0-4687-9588-33fd56627615\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226235 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-policies\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226254 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-serving-cert\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226274 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfrbj\" (UniqueName: \"kubernetes.io/projected/98a8f114-013f-4c87-892a-696c15825932-kube-api-access-tfrbj\") pod \"migrator-59844c95c7-nxckq\" (UID: \"98a8f114-013f-4c87-892a-696c15825932\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226300 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-client-ca\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226323 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-56wpl\" (UniqueName: \"kubernetes.io/projected/6e0fe412-7289-4f74-8039-b436ebac13e6-kube-api-access-56wpl\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4cd4eed4-e59b-4987-936a-b880b81311a1-images\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226365 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226387 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226406 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9c6078e-9f01-4aab-adff-db90e6ddedfe-config\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" 
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226427 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c195e6-53d6-46c5-bc06-f084727fec7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226446 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d9c6078e-9f01-4aab-adff-db90e6ddedfe-machine-approver-tls\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226488 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd4eed4-e59b-4987-936a-b880b81311a1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226504 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226520 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd4eed4-e59b-4987-936a-b880b81311a1-config\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxvw\" (UniqueName: \"kubernetes.io/projected/35e8bd20-c06e-486c-b8c7-0e60df48448b-kube-api-access-xlxvw\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226554 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c195e6-53d6-46c5-bc06-f084727fec7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226569 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkw7k\" (UniqueName: \"kubernetes.io/projected/95c195e6-53d6-46c5-bc06-f084727fec7b-kube-api-access-kkw7k\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" 
Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226587 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-etcd-client\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226602 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-config\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226615 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-image-import-ca\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226633 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0fe412-7289-4f74-8039-b436ebac13e6-audit-dir\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226649 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226666 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59b659e-3cc4-4463-9499-dfd40eec1d47-serving-cert\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226692 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/35e8bd20-c06e-486c-b8c7-0e60df48448b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226726 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-encryption-config\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226741 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-dir\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226760 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226778 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-service-ca-bundle\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226795 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qth\" (UniqueName: \"kubernetes.io/projected/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-kube-api-access-b6qth\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226813 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226842 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-serving-cert\") pod 
\"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226841 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226858 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smz2w\" (UniqueName: \"kubernetes.io/projected/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-kube-api-access-smz2w\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226884 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-config\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226907 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226923 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-j5997\" (UniqueName: \"kubernetes.io/projected/2a7b733e-ad98-408b-a125-1e4f0953dafa-kube-api-access-j5997\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226941 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm26r\" (UniqueName: \"kubernetes.io/projected/d9c6078e-9f01-4aab-adff-db90e6ddedfe-kube-api-access-tm26r\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226957 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc59d647-0338-4bd2-a850-3e2ede6fa766-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.226972 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-etcd-serving-ca\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227000 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24x8\" (UniqueName: \"kubernetes.io/projected/650860ca-e588-4148-b22f-1f4e7ba16b2d-kube-api-access-j24x8\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227017 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227033 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227050 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227069 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227705 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-config\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4cd4eed4-e59b-4987-936a-b880b81311a1-images\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227906 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.227926 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-config\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.228174 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-service-ca-bundle\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 
18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.228315 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9c6078e-9f01-4aab-adff-db90e6ddedfe-config\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.228981 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e0fe412-7289-4f74-8039-b436ebac13e6-audit-dir\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.229811 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d9c6078e-9f01-4aab-adff-db90e6ddedfe-auth-proxy-config\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.229863 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d943bce1-c743-4eea-99b2-e38c69a22211-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.230489 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tl5vr"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.231972 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.232632 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-client-ca\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.231579 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cd4eed4-e59b-4987-936a-b880b81311a1-config\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.231895 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.231489 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e0fe412-7289-4f74-8039-b436ebac13e6-audit-policies\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.231080 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-client-ca\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: 
\"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.235860 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-serving-cert\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.236043 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7hgt"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.236236 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.236548 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.236776 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.236850 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.237156 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.237572 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.237591 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.237864 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.237876 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.238200 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.238338 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59b659e-3cc4-4463-9499-dfd40eec1d47-serving-cert\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.238492 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.238493 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.239655 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-serving-cert\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.239732 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.240573 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-etcd-client\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.241983 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zm2q9"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.242017 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.253947 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd4eed4-e59b-4987-936a-b880b81311a1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.254299 
4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d943bce1-c743-4eea-99b2-e38c69a22211-serving-cert\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.254572 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e0fe412-7289-4f74-8039-b436ebac13e6-encryption-config\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.256489 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d9c6078e-9f01-4aab-adff-db90e6ddedfe-machine-approver-tls\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.258384 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.258681 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjdzl"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.261242 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.262270 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pp4k8"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 
18:44:12.263377 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lndnr"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.263804 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.264282 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.265261 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2b2nm"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.266185 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.267135 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.268066 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.269088 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jnn7h"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.270012 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-82pv2"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.271001 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7hgt"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.271880 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.272864 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.273858 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78h7b"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.274851 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5qzkc"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.275797 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4q2sd"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.278050 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.278074 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gzn9s"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.278437 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gzn9s" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.278732 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.279006 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lb2h8"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.280045 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.280177 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.280918 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.281870 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24pqv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.282806 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.283889 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.284238 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.284910 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-64f7c"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.285881 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4q2sd"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.286832 4770 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.287787 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lb2h8"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.288772 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.289660 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.290638 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.291726 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cs5nv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.292463 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cs5nv" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.292680 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cs5nv"] Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.304292 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.324413 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327509 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-serving-cert\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327540 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfrbj\" (UniqueName: \"kubernetes.io/projected/98a8f114-013f-4c87-892a-696c15825932-kube-api-access-tfrbj\") pod \"migrator-59844c95c7-nxckq\" (UID: \"98a8f114-013f-4c87-892a-696c15825932\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327562 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-policies\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327584 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327610 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327624 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxvw\" (UniqueName: \"kubernetes.io/projected/35e8bd20-c06e-486c-b8c7-0e60df48448b-kube-api-access-xlxvw\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327651 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-config\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327669 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-image-import-ca\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327698 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/35e8bd20-c06e-486c-b8c7-0e60df48448b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327732 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-encryption-config\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327750 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-dir\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327768 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327816 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qth\" (UniqueName: \"kubernetes.io/projected/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-kube-api-access-b6qth\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: 
\"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327832 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-config\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327919 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5997\" (UniqueName: \"kubernetes.io/projected/2a7b733e-ad98-408b-a125-1e4f0953dafa-kube-api-access-j5997\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327957 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/cc59d647-0338-4bd2-a850-3e2ede6fa766-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327974 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-etcd-serving-ca\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.327993 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328009 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328089 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-dir\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328452 
4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/35e8bd20-c06e-486c-b8c7-0e60df48448b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328468 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328700 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-config\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328863 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: 
\"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328872 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-config\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328882 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24x8\" (UniqueName: \"kubernetes.io/projected/650860ca-e588-4148-b22f-1f4e7ba16b2d-kube-api-access-j24x8\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328904 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-image-import-ca\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328922 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mv7t\" (UniqueName: \"kubernetes.io/projected/6fff6531-8ffa-478f-977b-a9daf12938fe-kube-api-access-2mv7t\") pod \"downloads-7954f5f757-jnn7h\" (UID: \"6fff6531-8ffa-478f-977b-a9daf12938fe\") " pod="openshift-console/downloads-7954f5f757-jnn7h" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329090 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2lkn\" (UniqueName: \"kubernetes.io/projected/65b0fb1c-f1ee-475d-9c5c-55f66744622f-kube-api-access-h2lkn\") pod 
\"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.328837 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329152 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-etcd-serving-ca\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-service-ca\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329580 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329540 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329803 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlx7d\" (UniqueName: \"kubernetes.io/projected/c2e69bd3-7fa0-4687-9588-33fd56627615-kube-api-access-vlx7d\") pod \"multus-admission-controller-857f4d67dd-hjdzl\" (UID: \"c2e69bd3-7fa0-4687-9588-33fd56627615\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329842 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-etcd-client\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329858 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69397d9a-26a6-4ce7-806b-59fca2691a73-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329875 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/650860ca-e588-4148-b22f-1f4e7ba16b2d-node-pullsecrets\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.329957 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-policies\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330095 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/650860ca-e588-4148-b22f-1f4e7ba16b2d-node-pullsecrets\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330118 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-service-ca\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330158 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69397d9a-26a6-4ce7-806b-59fca2691a73-config\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330188 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42gzk\" (UniqueName: \"kubernetes.io/projected/cc59d647-0338-4bd2-a850-3e2ede6fa766-kube-api-access-42gzk\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330209 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330233 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc59d647-0338-4bd2-a850-3e2ede6fa766-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330250 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-audit\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330290 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330310 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-ca\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330326 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35e8bd20-c06e-486c-b8c7-0e60df48448b-serving-cert\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/650860ca-e588-4148-b22f-1f4e7ba16b2d-audit-dir\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330372 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc59d647-0338-4bd2-a850-3e2ede6fa766-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330394 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-client\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330412 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330434 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69397d9a-26a6-4ce7-806b-59fca2691a73-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330452 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7b733e-ad98-408b-a125-1e4f0953dafa-serving-cert\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330467 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2e69bd3-7fa0-4687-9588-33fd56627615-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjdzl\" (UID: \"c2e69bd3-7fa0-4687-9588-33fd56627615\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330484 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.330912 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/650860ca-e588-4148-b22f-1f4e7ba16b2d-audit\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.331254 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-ca\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.331452 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/650860ca-e588-4148-b22f-1f4e7ba16b2d-audit-dir\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.331511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.331528 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-encryption-config\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " 
pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.331864 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cc59d647-0338-4bd2-a850-3e2ede6fa766-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.332052 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.332500 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.332825 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.333440 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-etcd-client\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.333838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc59d647-0338-4bd2-a850-3e2ede6fa766-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334107 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334191 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a7b733e-ad98-408b-a125-1e4f0953dafa-etcd-client\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334519 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334579 
4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a7b733e-ad98-408b-a125-1e4f0953dafa-serving-cert\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334681 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334733 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334805 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/650860ca-e588-4148-b22f-1f4e7ba16b2d-serving-cert\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.334897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.335032 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2e69bd3-7fa0-4687-9588-33fd56627615-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjdzl\" (UID: \"c2e69bd3-7fa0-4687-9588-33fd56627615\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.335611 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.336326 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35e8bd20-c06e-486c-b8c7-0e60df48448b-serving-cert\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.337545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.343576 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 18:44:12 crc kubenswrapper[4770]: 
I0126 18:44:12.363685 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.383528 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.414324 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.424710 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.444226 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.463933 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.484453 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.503968 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.514148 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69397d9a-26a6-4ce7-806b-59fca2691a73-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.525085 4770 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.530788 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69397d9a-26a6-4ce7-806b-59fca2691a73-config\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.544025 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.563678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.584566 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.605119 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.664562 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.684912 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.705150 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 
18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.724914 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.745212 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.765124 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.784520 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.805846 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.825257 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.844803 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.865558 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.885675 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.904439 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 18:44:12 crc 
kubenswrapper[4770]: I0126 18:44:12.925192 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.945975 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.966007 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 18:44:12 crc kubenswrapper[4770]: I0126 18:44:12.985533 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.004744 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.024857 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.044950 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.065174 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.084792 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.113558 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.124962 4770 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.144192 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.164618 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.184014 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.204603 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.224203 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229239 4770 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229277 4770 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229351 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-trusted-ca podName:8a0fb56c-a92c-4b40-bac2-a8cd958035f0 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:13.729319195 +0000 UTC m=+138.294225927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-trusted-ca") pod "console-operator-58897d9998-hpfp2" (UID: "8a0fb56c-a92c-4b40-bac2-a8cd958035f0") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229394 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-serving-cert podName:8a0fb56c-a92c-4b40-bac2-a8cd958035f0 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:13.729362306 +0000 UTC m=+138.294269248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-serving-cert") pod "console-operator-58897d9998-hpfp2" (UID: "8a0fb56c-a92c-4b40-bac2-a8cd958035f0") : failed to sync secret cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229644 4770 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229656 4770 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229691 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/95c195e6-53d6-46c5-bc06-f084727fec7b-config podName:95c195e6-53d6-46c5-bc06-f084727fec7b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:13.729682595 +0000 UTC m=+138.294589327 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/95c195e6-53d6-46c5-bc06-f084727fec7b-config") pod "openshift-apiserver-operator-796bbdcf4f-m22qb" (UID: "95c195e6-53d6-46c5-bc06-f084727fec7b") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229808 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-config podName:8a0fb56c-a92c-4b40-bac2-a8cd958035f0 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:13.729776938 +0000 UTC m=+138.294683840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-config") pod "console-operator-58897d9998-hpfp2" (UID: "8a0fb56c-a92c-4b40-bac2-a8cd958035f0") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.229938 4770 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: E0126 18:44:13.230156 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95c195e6-53d6-46c5-bc06-f084727fec7b-serving-cert podName:95c195e6-53d6-46c5-bc06-f084727fec7b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:13.730123477 +0000 UTC m=+138.295030239 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/95c195e6-53d6-46c5-bc06-f084727fec7b-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-m22qb" (UID: "95c195e6-53d6-46c5-bc06-f084727fec7b") : failed to sync secret cache: timed out waiting for the condition Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.242412 4770 request.go:700] Waited for 1.016986931s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.244273 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.265673 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.303231 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c699m\" (UniqueName: \"kubernetes.io/projected/4cd4eed4-e59b-4987-936a-b880b81311a1-kube-api-access-c699m\") pod \"machine-api-operator-5694c8668f-zm2q9\" (UID: \"4cd4eed4-e59b-4987-936a-b880b81311a1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.338176 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw947\" (UniqueName: \"kubernetes.io/projected/a59b659e-3cc4-4463-9499-dfd40eec1d47-kube-api-access-jw947\") pod \"controller-manager-879f6c89f-h8sjr\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.363756 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bnzpf\" (UniqueName: \"kubernetes.io/projected/d943bce1-c743-4eea-99b2-e38c69a22211-kube-api-access-bnzpf\") pod \"authentication-operator-69f744f599-pdg7h\" (UID: \"d943bce1-c743-4eea-99b2-e38c69a22211\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.400796 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm26r\" (UniqueName: \"kubernetes.io/projected/d9c6078e-9f01-4aab-adff-db90e6ddedfe-kube-api-access-tm26r\") pod \"machine-approver-56656f9798-bmthj\" (UID: \"d9c6078e-9f01-4aab-adff-db90e6ddedfe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.434770 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgq8p\" (UniqueName: \"kubernetes.io/projected/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-kube-api-access-sgq8p\") pod \"route-controller-manager-6576b87f9c-fvbpk\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.445619 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.452384 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56wpl\" (UniqueName: \"kubernetes.io/projected/6e0fe412-7289-4f74-8039-b436ebac13e6-kube-api-access-56wpl\") pod \"apiserver-7bbb656c7d-rj6f7\" (UID: \"6e0fe412-7289-4f74-8039-b436ebac13e6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.465404 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.484951 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.497032 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.504458 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.510991 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.517405 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.524919 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.546216 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.548243 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.558059 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.565489 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.566960 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.585684 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.605642 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.624374 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.645311 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.667521 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.684798 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.704560 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.725348 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 
18:44:13.747144 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.754748 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c195e6-53d6-46c5-bc06-f084727fec7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.754835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c195e6-53d6-46c5-bc06-f084727fec7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.755044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-serving-cert\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.755074 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-trusted-ca\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.755110 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-config\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.765281 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.786038 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.804435 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.812895 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"] Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.825691 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: W0126 18:44:13.828904 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6475f7_5a18_43bd_bb55_c7d4a3bd33db.slice/crio-8089276758b33063c8d4c8e0288428a7463562c5e796b1d7a2d014c463d740ed WatchSource:0}: Error finding container 8089276758b33063c8d4c8e0288428a7463562c5e796b1d7a2d014c463d740ed: Status 404 returned error can't find the container with id 8089276758b33063c8d4c8e0288428a7463562c5e796b1d7a2d014c463d740ed Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.835857 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pdg7h"] 
Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.837628 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zm2q9"] Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.844906 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.866399 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.883786 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.910035 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.924107 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.944148 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.964761 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 18:44:13 crc kubenswrapper[4770]: I0126 18:44:13.986145 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.005287 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.027192 4770 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.044627 4770 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.067372 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.084324 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.092874 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7"] Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.093966 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h8sjr"] Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.105017 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.125155 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.145427 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.165014 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.185867 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.204820 4770 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.242419 4770 request.go:700] Waited for 1.914549186s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.242595 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfrbj\" (UniqueName: \"kubernetes.io/projected/98a8f114-013f-4c87-892a-696c15825932-kube-api-access-tfrbj\") pod \"migrator-59844c95c7-nxckq\" (UID: \"98a8f114-013f-4c87-892a-696c15825932\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.268403 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxvw\" (UniqueName: \"kubernetes.io/projected/35e8bd20-c06e-486c-b8c7-0e60df48448b-kube-api-access-xlxvw\") pod \"openshift-config-operator-7777fb866f-78h7b\" (UID: \"35e8bd20-c06e-486c-b8c7-0e60df48448b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.292315 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5997\" (UniqueName: \"kubernetes.io/projected/2a7b733e-ad98-408b-a125-1e4f0953dafa-kube-api-access-j5997\") pod \"etcd-operator-b45778765-64f7c\" (UID: \"2a7b733e-ad98-408b-a125-1e4f0953dafa\") " pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.305080 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mv7t\" (UniqueName: \"kubernetes.io/projected/6fff6531-8ffa-478f-977b-a9daf12938fe-kube-api-access-2mv7t\") pod \"downloads-7954f5f757-jnn7h\" (UID: \"6fff6531-8ffa-478f-977b-a9daf12938fe\") " 
pod="openshift-console/downloads-7954f5f757-jnn7h" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.328129 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.333892 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc59d647-0338-4bd2-a850-3e2ede6fa766-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.339258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qth\" (UniqueName: \"kubernetes.io/projected/c9ca31d9-c0f7-4bb1-8309-5481cefb40bd-kube-api-access-b6qth\") pod \"openshift-controller-manager-operator-756b6f6bc6-4d6cp\" (UID: \"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:14 crc kubenswrapper[4770]: E0126 18:44:14.348354 4770 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.370148 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24x8\" (UniqueName: \"kubernetes.io/projected/650860ca-e588-4148-b22f-1f4e7ba16b2d-kube-api-access-j24x8\") pod \"apiserver-76f77b778f-lndnr\" (UID: \"650860ca-e588-4148-b22f-1f4e7ba16b2d\") " pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.373512 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.381583 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" Jan 26 18:44:14 crc kubenswrapper[4770]: E0126 18:44:14.388037 4770 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.391692 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2lkn\" (UniqueName: \"kubernetes.io/projected/65b0fb1c-f1ee-475d-9c5c-55f66744622f-kube-api-access-h2lkn\") pod \"oauth-openshift-558db77b4-2b2nm\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.402549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlx7d\" (UniqueName: \"kubernetes.io/projected/c2e69bd3-7fa0-4687-9588-33fd56627615-kube-api-access-vlx7d\") pod \"multus-admission-controller-857f4d67dd-hjdzl\" (UID: \"c2e69bd3-7fa0-4687-9588-33fd56627615\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.416808 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gzk\" (UniqueName: \"kubernetes.io/projected/cc59d647-0338-4bd2-a850-3e2ede6fa766-kube-api-access-42gzk\") pod \"cluster-image-registry-operator-dc59b4c8b-qc9kl\" (UID: \"cc59d647-0338-4bd2-a850-3e2ede6fa766\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.440032 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/69397d9a-26a6-4ce7-806b-59fca2691a73-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-v5lcr\" (UID: \"69397d9a-26a6-4ce7-806b-59fca2691a73\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.482313 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" event={"ID":"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db","Type":"ContainerStarted","Data":"8089276758b33063c8d4c8e0288428a7463562c5e796b1d7a2d014c463d740ed"} Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.483507 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" event={"ID":"d9c6078e-9f01-4aab-adff-db90e6ddedfe","Type":"ContainerStarted","Data":"4b8ba42ab7c40db2e68462ca502574c9740fa2e9593ec6f0665626cf6221dc6e"} Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.485255 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.505186 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.525252 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: E0126 18:44:14.528715 4770 projected.go:194] Error preparing data for projected volume kube-api-access-smz2w for pod openshift-console-operator/console-operator-58897d9998-hpfp2: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:14 crc kubenswrapper[4770]: E0126 18:44:14.528816 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-kube-api-access-smz2w podName:8a0fb56c-a92c-4b40-bac2-a8cd958035f0 nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.028793296 +0000 UTC m=+139.593700038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-smz2w" (UniqueName: "kubernetes.io/projected/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-kube-api-access-smz2w") pod "console-operator-58897d9998-hpfp2" (UID: "8a0fb56c-a92c-4b40-bac2-a8cd958035f0") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.554530 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.557267 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-trusted-ca\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.564514 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.566645 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-config\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.585669 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 18:44:14 crc kubenswrapper[4770]: 
I0126 18:44:14.601499 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c195e6-53d6-46c5-bc06-f084727fec7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.606624 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.617017 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c195e6-53d6-46c5-bc06-f084727fec7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.625155 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: E0126 18:44:14.629145 4770 projected.go:194] Error preparing data for projected volume kube-api-access-kkw7k for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb: failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:14 crc kubenswrapper[4770]: E0126 18:44:14.629298 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/95c195e6-53d6-46c5-bc06-f084727fec7b-kube-api-access-kkw7k podName:95c195e6-53d6-46c5-bc06-f084727fec7b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.129250957 +0000 UTC m=+139.694157729 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kkw7k" (UniqueName: "kubernetes.io/projected/95c195e6-53d6-46c5-bc06-f084727fec7b-kube-api-access-kkw7k") pod "openshift-apiserver-operator-796bbdcf4f-m22qb" (UID: "95c195e6-53d6-46c5-bc06-f084727fec7b") : failed to sync configmap cache: timed out waiting for the condition Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.644438 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.665239 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.671021 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-serving-cert\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.684891 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.976194 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.976691 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jnn7h" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.977039 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.977210 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.977810 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.979293 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.980969 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.986245 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-tls\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.986371 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:14 crc kubenswrapper[4770]: I0126 18:44:14.986600 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:14 crc kubenswrapper[4770]: E0126 18:44:14.991162 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.491126483 +0000 UTC m=+140.056033245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.094457 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.095244 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.595209364 +0000 UTC m=+140.160116106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.095328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-oauth-serving-cert\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.095541 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-certificates\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.095586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrr59\" (UniqueName: \"kubernetes.io/projected/d6fd3922-5ed0-4e60-9db5-94eb263b410b-kube-api-access-vrr59\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.095623 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrfk\" (UniqueName: 
\"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-kube-api-access-lxrfk\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.095921 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smz2w\" (UniqueName: \"kubernetes.io/projected/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-kube-api-access-smz2w\") pod \"console-operator-58897d9998-hpfp2\" (UID: \"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.095962 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-tls\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.095986 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f1ef4aa-d364-4658-8e8a-cd473fcaf81b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xx2j2\" (UID: \"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.096018 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-service-ca\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 
18:44:15.096565 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.096715 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.096868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-config\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.097051 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.597028065 +0000 UTC m=+140.161934807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.097548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-trusted-ca\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.097660 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-trusted-ca-bundle\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.097817 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-oauth-config\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.097934 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76233aff-879a-4848-8f11-b75d2fa524b5-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.097976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.097997 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-bound-sa-token\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.098014 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-serving-cert\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.098053 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxl66\" (UniqueName: \"kubernetes.io/projected/8f1ef4aa-d364-4658-8e8a-cd473fcaf81b-kube-api-access-cxl66\") pod \"cluster-samples-operator-665b6dd947-xx2j2\" (UID: \"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.098090 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76233aff-879a-4848-8f11-b75d2fa524b5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.098194 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76233aff-879a-4848-8f11-b75d2fa524b5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.099802 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.102129 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-tls\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.102412 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smz2w\" (UniqueName: \"kubernetes.io/projected/8a0fb56c-a92c-4b40-bac2-a8cd958035f0-kube-api-access-smz2w\") pod \"console-operator-58897d9998-hpfp2\" (UID: 
\"8a0fb56c-a92c-4b40-bac2-a8cd958035f0\") " pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.199205 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.199862 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.69981577 +0000 UTC m=+140.264722502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.200187 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-signing-key\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.200262 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-signing-cabundle\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.200351 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-service-ca\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.203646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-service-ca\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.200374 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.208521 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-csi-data-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.208597 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-registration-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.208625 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29c959ef-d865-49e3-af00-eef8726e6cb2-proxy-tls\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.208678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dzf\" (UniqueName: \"kubernetes.io/projected/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-kube-api-access-q4dzf\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.208772 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.208979 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.209303 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.709288192 +0000 UTC m=+140.274194924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.209659 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e81c2eec-e611-4338-abe6-50e0551b3e44-cert\") pod \"ingress-canary-cs5nv\" (UID: \"e81c2eec-e611-4338-abe6-50e0551b3e44\") " pod="openshift-ingress-canary/ingress-canary-cs5nv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210233 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh8fb\" (UniqueName: \"kubernetes.io/projected/ca71a19b-a881-4fe9-b826-0814de7abe2b-kube-api-access-kh8fb\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210307 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3c946c9b-8aca-4750-a9df-9bde5608a7cf-profile-collector-cert\") 
pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210405 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f522286-ca46-4767-8813-5d5079d1d108-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bvh46\" (UID: \"4f522286-ca46-4767-8813-5d5079d1d108\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210429 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7x2t\" (UniqueName: \"kubernetes.io/projected/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-kube-api-access-g7x2t\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210466 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-default-certificate\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210490 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a053962a-e909-45aa-8514-8eab47372fcb-serving-cert\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 
crc kubenswrapper[4770]: I0126 18:44:15.210521 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-trusted-ca\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210544 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpkng\" (UniqueName: \"kubernetes.io/projected/a053962a-e909-45aa-8514-8eab47372fcb-kube-api-access-fpkng\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210569 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-trusted-ca-bundle\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhqh\" (UniqueName: \"kubernetes.io/projected/f8026767-1e92-4355-9225-bb0679727208-kube-api-access-jrhqh\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210617 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-oauth-config\") pod \"console-f9d7485db-5qzkc\" (UID: 
\"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210644 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-tmpfs\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210668 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-stats-auth\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210690 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztnzh\" (UniqueName: \"kubernetes.io/projected/e81c2eec-e611-4338-abe6-50e0551b3e44-kube-api-access-ztnzh\") pod \"ingress-canary-cs5nv\" (UID: \"e81c2eec-e611-4338-abe6-50e0551b3e44\") " pod="openshift-ingress-canary/ingress-canary-cs5nv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210727 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-plugins-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210757 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76233aff-879a-4848-8f11-b75d2fa524b5-serving-cert\") 
pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210781 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210805 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8026767-1e92-4355-9225-bb0679727208-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210829 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-apiservice-cert\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210854 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-serving-cert\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.210879 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-metrics-certs\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211081 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1235bb3b-6e40-49b5-bf08-9a8f040587f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211111 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-metrics-tls\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211140 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jfd2\" (UniqueName: \"kubernetes.io/projected/29c959ef-d865-49e3-af00-eef8726e6cb2-kube-api-access-9jfd2\") pod 
\"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211164 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99112e4-bf15-412c-89dd-a68b4bd43dd5-config-volume\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211251 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-oauth-serving-cert\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a053962a-e909-45aa-8514-8eab47372fcb-config\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211296 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4m4\" (UniqueName: \"kubernetes.io/projected/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-kube-api-access-6x4m4\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211319 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdt5r\" (UniqueName: \"kubernetes.io/projected/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-kube-api-access-wdt5r\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211368 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46410587-7603-41b1-8312-712aa74947ae-proxy-tls\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211500 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-socket-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211546 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxrfk\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-kube-api-access-lxrfk\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211642 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrcp2\" (UniqueName: \"kubernetes.io/projected/f75e3ecf-a603-443e-b93c-6f1ca0407fec-kube-api-access-xrcp2\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211685 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/23cf8f72-83fa-451e-afe9-08b8377f969d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dz75h\" (UID: \"23cf8f72-83fa-451e-afe9-08b8377f969d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211782 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjtvv\" (UniqueName: \"kubernetes.io/projected/3c946c9b-8aca-4750-a9df-9bde5608a7cf-kube-api-access-jjtvv\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211807 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6tfm\" (UniqueName: \"kubernetes.io/projected/4f522286-ca46-4767-8813-5d5079d1d108-kube-api-access-d6tfm\") pod \"package-server-manager-789f6589d5-bvh46\" (UID: \"4f522286-ca46-4767-8813-5d5079d1d108\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211848 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-config\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211868 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhvx\" (UniqueName: \"kubernetes.io/projected/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-kube-api-access-7rhvx\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211899 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca71a19b-a881-4fe9-b826-0814de7abe2b-node-bootstrap-token\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.211988 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-webhook-cert\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212036 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-config-volume\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212065 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212086 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5cf2\" (UniqueName: \"kubernetes.io/projected/1235bb3b-6e40-49b5-bf08-9a8f040587f9-kube-api-access-x5cf2\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212110 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-mountpoint-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212127 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jsxz\" (UniqueName: \"kubernetes.io/projected/7bd0341c-5414-42a6-988e-b05c09a2c5c8-kube-api-access-4jsxz\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212155 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca71a19b-a881-4fe9-b826-0814de7abe2b-certs\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212194 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99112e4-bf15-412c-89dd-a68b4bd43dd5-secret-volume\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.212219 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2df4\" (UniqueName: \"kubernetes.io/projected/23cf8f72-83fa-451e-afe9-08b8377f969d-kube-api-access-w2df4\") pod \"control-plane-machine-set-operator-78cbb6b69f-dz75h\" (UID: \"23cf8f72-83fa-451e-afe9-08b8377f969d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.215449 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-config\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216463 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-oauth-serving-cert\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216559 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216580 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd0341c-5414-42a6-988e-b05c09a2c5c8-service-ca-bundle\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216598 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8kqn\" (UniqueName: \"kubernetes.io/projected/c99112e4-bf15-412c-89dd-a68b4bd43dd5-kube-api-access-v8kqn\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216622 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c946c9b-8aca-4750-a9df-9bde5608a7cf-srv-cert\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216641 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46410587-7603-41b1-8312-712aa74947ae-images\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216702 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-config\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.216902 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-bound-sa-token\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217027 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-642qv\" (UniqueName: \"kubernetes.io/projected/46410587-7603-41b1-8312-712aa74947ae-kube-api-access-642qv\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217081 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-trusted-ca-bundle\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217102 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8026767-1e92-4355-9225-bb0679727208-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217110 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-trusted-ca\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217321 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxl66\" (UniqueName: \"kubernetes.io/projected/8f1ef4aa-d364-4658-8e8a-cd473fcaf81b-kube-api-access-cxl66\") pod \"cluster-samples-operator-665b6dd947-xx2j2\" (UID: \"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217344 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46410587-7603-41b1-8312-712aa74947ae-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217403 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1235bb3b-6e40-49b5-bf08-9a8f040587f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76233aff-879a-4848-8f11-b75d2fa524b5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217539 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkw7k\" (UniqueName: \"kubernetes.io/projected/95c195e6-53d6-46c5-bc06-f084727fec7b-kube-api-access-kkw7k\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76233aff-879a-4848-8f11-b75d2fa524b5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29c959ef-d865-49e3-af00-eef8726e6cb2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217764 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217801 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-certificates\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrr59\" (UniqueName: \"kubernetes.io/projected/d6fd3922-5ed0-4e60-9db5-94eb263b410b-kube-api-access-vrr59\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217864 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-srv-cert\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217889 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1237eddc-c2bc-417f-a757-79ec35624f0b-metrics-tls\") pod \"dns-operator-744455d44c-l7hgt\" (UID: \"1237eddc-c2bc-417f-a757-79ec35624f0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217930 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f1ef4aa-d364-4658-8e8a-cd473fcaf81b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xx2j2\" (UID: \"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.217962 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq6tq\" (UniqueName: \"kubernetes.io/projected/1237eddc-c2bc-417f-a757-79ec35624f0b-kube-api-access-pq6tq\") pod \"dns-operator-744455d44c-l7hgt\" (UID: \"1237eddc-c2bc-417f-a757-79ec35624f0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.218322 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76233aff-879a-4848-8f11-b75d2fa524b5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.219799 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-certificates\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.221809 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76233aff-879a-4848-8f11-b75d2fa524b5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.222316 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.222561 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkw7k\" (UniqueName: \"kubernetes.io/projected/95c195e6-53d6-46c5-bc06-f084727fec7b-kube-api-access-kkw7k\") pod \"openshift-apiserver-operator-796bbdcf4f-m22qb\" (UID: \"95c195e6-53d6-46c5-bc06-f084727fec7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.223248 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-serving-cert\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.223312 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-oauth-config\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.226232 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f1ef4aa-d364-4658-8e8a-cd473fcaf81b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xx2j2\" (UID: \"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.268844 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxrfk\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-kube-api-access-lxrfk\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.293360 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-bound-sa-token\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.306891 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76233aff-879a-4848-8f11-b75d2fa524b5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g9bgr\" (UID: \"76233aff-879a-4848-8f11-b75d2fa524b5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.308651 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322107 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322339 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-metrics-tls\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jfd2\" (UniqueName: \"kubernetes.io/projected/29c959ef-d865-49e3-af00-eef8726e6cb2-kube-api-access-9jfd2\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322400 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99112e4-bf15-412c-89dd-a68b4bd43dd5-config-volume\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322421 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a053962a-e909-45aa-8514-8eab47372fcb-config\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322443 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4m4\" (UniqueName: \"kubernetes.io/projected/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-kube-api-access-6x4m4\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322462 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdt5r\" (UniqueName: \"kubernetes.io/projected/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-kube-api-access-wdt5r\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322481 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46410587-7603-41b1-8312-712aa74947ae-proxy-tls\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322498 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-socket-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322513 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrcp2\" (UniqueName: \"kubernetes.io/projected/f75e3ecf-a603-443e-b93c-6f1ca0407fec-kube-api-access-xrcp2\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322543 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/23cf8f72-83fa-451e-afe9-08b8377f969d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dz75h\" (UID: \"23cf8f72-83fa-451e-afe9-08b8377f969d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322586 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjtvv\" (UniqueName: \"kubernetes.io/projected/3c946c9b-8aca-4750-a9df-9bde5608a7cf-kube-api-access-jjtvv\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322609 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6tfm\" (UniqueName: \"kubernetes.io/projected/4f522286-ca46-4767-8813-5d5079d1d108-kube-api-access-d6tfm\") pod \"package-server-manager-789f6589d5-bvh46\" (UID: \"4f522286-ca46-4767-8813-5d5079d1d108\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322633 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca71a19b-a881-4fe9-b826-0814de7abe2b-node-bootstrap-token\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322648 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-webhook-cert\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322664 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhvx\" (UniqueName: \"kubernetes.io/projected/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-kube-api-access-7rhvx\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-config-volume\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322699 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322742 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5cf2\" (UniqueName: \"kubernetes.io/projected/1235bb3b-6e40-49b5-bf08-9a8f040587f9-kube-api-access-x5cf2\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322758 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-mountpoint-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322773 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jsxz\" (UniqueName: \"kubernetes.io/projected/7bd0341c-5414-42a6-988e-b05c09a2c5c8-kube-api-access-4jsxz\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322790 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca71a19b-a881-4fe9-b826-0814de7abe2b-certs\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322814 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99112e4-bf15-412c-89dd-a68b4bd43dd5-secret-volume\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2df4\" (UniqueName: \"kubernetes.io/projected/23cf8f72-83fa-451e-afe9-08b8377f969d-kube-api-access-w2df4\") pod \"control-plane-machine-set-operator-78cbb6b69f-dz75h\" (UID: \"23cf8f72-83fa-451e-afe9-08b8377f969d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322857 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322878 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd0341c-5414-42a6-988e-b05c09a2c5c8-service-ca-bundle\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322900 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8kqn\" (UniqueName: \"kubernetes.io/projected/c99112e4-bf15-412c-89dd-a68b4bd43dd5-kube-api-access-v8kqn\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322922 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c946c9b-8aca-4750-a9df-9bde5608a7cf-srv-cert\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322942 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46410587-7603-41b1-8312-712aa74947ae-images\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322961 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-config\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322979 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-642qv\" (UniqueName: \"kubernetes.io/projected/46410587-7603-41b1-8312-712aa74947ae-kube-api-access-642qv\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.322999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8026767-1e92-4355-9225-bb0679727208-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323026 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46410587-7603-41b1-8312-712aa74947ae-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323051 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1235bb3b-6e40-49b5-bf08-9a8f040587f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29c959ef-d865-49e3-af00-eef8726e6cb2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323105 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-metrics-tls\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323127 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-srv-cert\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323147 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1237eddc-c2bc-417f-a757-79ec35624f0b-metrics-tls\") pod \"dns-operator-744455d44c-l7hgt\" (UID: \"1237eddc-c2bc-417f-a757-79ec35624f0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323165 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq6tq\" (UniqueName: \"kubernetes.io/projected/1237eddc-c2bc-417f-a757-79ec35624f0b-kube-api-access-pq6tq\") pod \"dns-operator-744455d44c-l7hgt\" (UID: \"1237eddc-c2bc-417f-a757-79ec35624f0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323182 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323196 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-signing-key\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323211 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-signing-cabundle\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323228 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-csi-data-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323251 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e81c2eec-e611-4338-abe6-50e0551b3e44-cert\") pod \"ingress-canary-cs5nv\" (UID: \"e81c2eec-e611-4338-abe6-50e0551b3e44\") " pod="openshift-ingress-canary/ingress-canary-cs5nv"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323290 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-registration-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323307 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29c959ef-d865-49e3-af00-eef8726e6cb2-proxy-tls\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323325 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4dzf\" (UniqueName: \"kubernetes.io/projected/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-kube-api-access-q4dzf\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323342 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3c946c9b-8aca-4750-a9df-9bde5608a7cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323361 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f522286-ca46-4767-8813-5d5079d1d108-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bvh46\" (UID: \"4f522286-ca46-4767-8813-5d5079d1d108\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323379 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh8fb\" (UniqueName: \"kubernetes.io/projected/ca71a19b-a881-4fe9-b826-0814de7abe2b-kube-api-access-kh8fb\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323402 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-default-certificate\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr"
Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323420 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7x2t\" (UniqueName: \"kubernetes.io/projected/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-kube-api-access-g7x2t\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: 
\"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323446 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a053962a-e909-45aa-8514-8eab47372fcb-serving-cert\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323466 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpkng\" (UniqueName: \"kubernetes.io/projected/a053962a-e909-45aa-8514-8eab47372fcb-kube-api-access-fpkng\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhqh\" (UniqueName: \"kubernetes.io/projected/f8026767-1e92-4355-9225-bb0679727208-kube-api-access-jrhqh\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323507 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-tmpfs\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323522 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-stats-auth\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-plugins-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323552 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztnzh\" (UniqueName: \"kubernetes.io/projected/e81c2eec-e611-4338-abe6-50e0551b3e44-kube-api-access-ztnzh\") pod \"ingress-canary-cs5nv\" (UID: \"e81c2eec-e611-4338-abe6-50e0551b3e44\") " pod="openshift-ingress-canary/ingress-canary-cs5nv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323570 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323585 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8026767-1e92-4355-9225-bb0679727208-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323603 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-apiservice-cert\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323624 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-metrics-certs\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323649 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1235bb3b-6e40-49b5-bf08-9a8f040587f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.323679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.323902 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.823887195 +0000 UTC m=+140.388793927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.329071 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46410587-7603-41b1-8312-712aa74947ae-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.329948 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99112e4-bf15-412c-89dd-a68b4bd43dd5-config-volume\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.330462 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a053962a-e909-45aa-8514-8eab47372fcb-config\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.331693 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-metrics-tls\") pod \"dns-default-lb2h8\" (UID: 
\"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.332058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-mountpoint-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.332329 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-socket-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.334102 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxl66\" (UniqueName: \"kubernetes.io/projected/8f1ef4aa-d364-4658-8e8a-cd473fcaf81b-kube-api-access-cxl66\") pod \"cluster-samples-operator-665b6dd947-xx2j2\" (UID: \"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.334175 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-registration-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.335089 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46410587-7603-41b1-8312-712aa74947ae-proxy-tls\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: 
\"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.335928 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bd0341c-5414-42a6-988e-b05c09a2c5c8-service-ca-bundle\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.341079 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/23cf8f72-83fa-451e-afe9-08b8377f969d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dz75h\" (UID: \"23cf8f72-83fa-451e-afe9-08b8377f969d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.343581 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f522286-ca46-4767-8813-5d5079d1d108-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bvh46\" (UID: \"4f522286-ca46-4767-8813-5d5079d1d108\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.345417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca71a19b-a881-4fe9-b826-0814de7abe2b-certs\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.345714 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-signing-key\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.346390 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-config\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.346923 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46410587-7603-41b1-8312-712aa74947ae-images\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.346993 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-signing-cabundle\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.347133 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-csi-data-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.347829 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.348897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1237eddc-c2bc-417f-a757-79ec35624f0b-metrics-tls\") pod \"dns-operator-744455d44c-l7hgt\" (UID: \"1237eddc-c2bc-417f-a757-79ec35624f0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.350015 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/29c959ef-d865-49e3-af00-eef8726e6cb2-proxy-tls\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.350419 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-srv-cert\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.352118 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-tmpfs\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.354690 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1235bb3b-6e40-49b5-bf08-9a8f040587f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: 
\"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.355480 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3c946c9b-8aca-4750-a9df-9bde5608a7cf-srv-cert\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.355761 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.355845 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca71a19b-a881-4fe9-b826-0814de7abe2b-node-bootstrap-token\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.356426 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-trusted-ca\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.356543 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8026767-1e92-4355-9225-bb0679727208-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.357510 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1235bb3b-6e40-49b5-bf08-9a8f040587f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.357573 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-config-volume\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.362140 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f75e3ecf-a603-443e-b93c-6f1ca0407fec-plugins-dir\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.362984 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99112e4-bf15-412c-89dd-a68b4bd43dd5-secret-volume\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.365059 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/29c959ef-d865-49e3-af00-eef8726e6cb2-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: 
\"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.366182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrr59\" (UniqueName: \"kubernetes.io/projected/d6fd3922-5ed0-4e60-9db5-94eb263b410b-kube-api-access-vrr59\") pod \"console-f9d7485db-5qzkc\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.366518 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a053962a-e909-45aa-8514-8eab47372fcb-serving-cert\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.382264 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.382457 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-metrics-certs\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.382285 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e81c2eec-e611-4338-abe6-50e0551b3e44-cert\") pod \"ingress-canary-cs5nv\" 
(UID: \"e81c2eec-e611-4338-abe6-50e0551b3e44\") " pod="openshift-ingress-canary/ingress-canary-cs5nv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.382644 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-apiservice-cert\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.382807 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-default-certificate\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.383034 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3c946c9b-8aca-4750-a9df-9bde5608a7cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.383294 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/7bd0341c-5414-42a6-988e-b05c09a2c5c8-stats-auth\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.383479 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.387352 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8026767-1e92-4355-9225-bb0679727208-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.387912 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.394192 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-webhook-cert\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.394730 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5e9cb7f-e595-4a56-928f-691fdb1c93f2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5v997\" (UID: \"b5e9cb7f-e595-4a56-928f-691fdb1c93f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.401165 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9jfd2\" (UniqueName: \"kubernetes.io/projected/29c959ef-d865-49e3-af00-eef8726e6cb2-kube-api-access-9jfd2\") pod \"machine-config-controller-84d6567774-wdd8j\" (UID: \"29c959ef-d865-49e3-af00-eef8726e6cb2\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.420752 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4m4\" (UniqueName: \"kubernetes.io/projected/cf831fd5-2de3-4d8e-8c93-2dadcdb72e15-kube-api-access-6x4m4\") pod \"packageserver-d55dfcdfc-zszln\" (UID: \"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.424748 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.425436 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:15.925410274 +0000 UTC m=+140.490317006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.440279 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdt5r\" (UniqueName: \"kubernetes.io/projected/ecc3859c-a7f3-4828-b58a-01b4570f0f7a-kube-api-access-wdt5r\") pod \"service-ca-9c57cc56f-82pv2\" (UID: \"ecc3859c-a7f3-4828-b58a-01b4570f0f7a\") " pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.465629 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrcp2\" (UniqueName: \"kubernetes.io/projected/f75e3ecf-a603-443e-b93c-6f1ca0407fec-kube-api-access-xrcp2\") pod \"csi-hostpathplugin-4q2sd\" (UID: \"f75e3ecf-a603-443e-b93c-6f1ca0407fec\") " pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.483772 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8kqn\" (UniqueName: \"kubernetes.io/projected/c99112e4-bf15-412c-89dd-a68b4bd43dd5-kube-api-access-v8kqn\") pod \"collect-profiles-29490870-vl9jv\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.491603 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-64f7c"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.497074 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.500161 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.505944 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" event={"ID":"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db","Type":"ContainerStarted","Data":"ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268"} Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.506350 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.511147 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jsxz\" (UniqueName: \"kubernetes.io/projected/7bd0341c-5414-42a6-988e-b05c09a2c5c8-kube-api-access-4jsxz\") pod \"router-default-5444994796-tl5vr\" (UID: \"7bd0341c-5414-42a6-988e-b05c09a2c5c8\") " pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.513347 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" event={"ID":"d943bce1-c743-4eea-99b2-e38c69a22211","Type":"ContainerStarted","Data":"8bd60e69df548661f9ce821f83d979d3d907624167c33870750d606aca389ed2"} Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.518064 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.525573 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.530331 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.030284047 +0000 UTC m=+140.595190789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.530619 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.531271 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.031261625 +0000 UTC m=+140.596168357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.543565 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjtvv\" (UniqueName: \"kubernetes.io/projected/3c946c9b-8aca-4750-a9df-9bde5608a7cf-kube-api-access-jjtvv\") pod \"catalog-operator-68c6474976-8sn2b\" (UID: \"3c946c9b-8aca-4750-a9df-9bde5608a7cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.547551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6tfm\" (UniqueName: \"kubernetes.io/projected/4f522286-ca46-4767-8813-5d5079d1d108-kube-api-access-d6tfm\") pod \"package-server-manager-789f6589d5-bvh46\" (UID: \"4f522286-ca46-4767-8813-5d5079d1d108\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.552093 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" event={"ID":"6e0fe412-7289-4f74-8039-b436ebac13e6","Type":"ContainerStarted","Data":"6006d42894222b7b157d36914b4688dfbd83ed621113b2190f06f42808710192"} Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.552846 4770 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jnn7h"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.559759 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.572335 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" event={"ID":"d9c6078e-9f01-4aab-adff-db90e6ddedfe","Type":"ContainerStarted","Data":"106bcafc26962cb1a14a8c1ebc42dd31908c05441fec9b8cbb1ed75bab9fef8e"} Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.586895 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.591187 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.598513 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2df4\" (UniqueName: \"kubernetes.io/projected/23cf8f72-83fa-451e-afe9-08b8377f969d-kube-api-access-w2df4\") pod \"control-plane-machine-set-operator-78cbb6b69f-dz75h\" (UID: \"23cf8f72-83fa-451e-afe9-08b8377f969d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.601778 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2b2nm"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.615814 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" event={"ID":"a59b659e-3cc4-4463-9499-dfd40eec1d47","Type":"ContainerStarted","Data":"a70eadcd60c57799eec5efa3e561787ed4de47225d13bb47bd170501bc799eb8"} Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.621619 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.622899 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.628850 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.632186 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.632584 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.132528788 +0000 UTC m=+140.697435520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.632739 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.633815 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.133793133 +0000 UTC m=+140.698699865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.634646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh8fb\" (UniqueName: \"kubernetes.io/projected/ca71a19b-a881-4fe9-b826-0814de7abe2b-kube-api-access-kh8fb\") pod \"machine-config-server-gzn9s\" (UID: \"ca71a19b-a881-4fe9-b826-0814de7abe2b\") " pod="openshift-machine-config-operator/machine-config-server-gzn9s" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.641598 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" event={"ID":"4cd4eed4-e59b-4987-936a-b880b81311a1","Type":"ContainerStarted","Data":"b5b3c15679f897e5ceae47d8cf013330153d3b342ba5896c6fa5b07a96060319"} Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.657429 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhqh\" (UniqueName: \"kubernetes.io/projected/f8026767-1e92-4355-9225-bb0679727208-kube-api-access-jrhqh\") pod \"marketplace-operator-79b997595-24pqv\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.661957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7x2t\" (UniqueName: \"kubernetes.io/projected/7fd51ff1-6cc0-45ad-aa7a-44a777720efd-kube-api-access-g7x2t\") pod \"olm-operator-6b444d44fb-mskpv\" (UID: \"7fd51ff1-6cc0-45ad-aa7a-44a777720efd\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.681612 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.692919 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.697444 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.706085 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.711371 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4dzf\" (UniqueName: \"kubernetes.io/projected/356f3610-472f-41ac-9d8d-7c94ce6b3b1c-kube-api-access-q4dzf\") pod \"ingress-operator-5b745b69d9-xdmd6\" (UID: \"356f3610-472f-41ac-9d8d-7c94ce6b3b1c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.720295 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.721054 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.723000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztnzh\" (UniqueName: \"kubernetes.io/projected/e81c2eec-e611-4338-abe6-50e0551b3e44-kube-api-access-ztnzh\") pod \"ingress-canary-cs5nv\" (UID: \"e81c2eec-e611-4338-abe6-50e0551b3e44\") " pod="openshift-ingress-canary/ingress-canary-cs5nv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.726386 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:15 crc kubenswrapper[4770]: W0126 18:44:15.731888 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ca31d9_c0f7_4bb1_8309_5481cefb40bd.slice/crio-da27af8ad26102dada790c977ef854633c444ee42fa6f54069465c5668b474ff WatchSource:0}: Error finding container da27af8ad26102dada790c977ef854633c444ee42fa6f54069465c5668b474ff: Status 404 returned error can't find the container with id da27af8ad26102dada790c977ef854633c444ee42fa6f54069465c5668b474ff Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.734314 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.734542 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.734785 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.234765708 +0000 UTC m=+140.799672440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.741339 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpkng\" (UniqueName: \"kubernetes.io/projected/a053962a-e909-45aa-8514-8eab47372fcb-kube-api-access-fpkng\") pod \"service-ca-operator-777779d784-n5rlf\" (UID: \"a053962a-e909-45aa-8514-8eab47372fcb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.745281 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq6tq\" (UniqueName: \"kubernetes.io/projected/1237eddc-c2bc-417f-a757-79ec35624f0b-kube-api-access-pq6tq\") pod \"dns-operator-744455d44c-l7hgt\" (UID: \"1237eddc-c2bc-417f-a757-79ec35624f0b\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.750275 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.762031 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.764573 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-642qv\" (UniqueName: \"kubernetes.io/projected/46410587-7603-41b1-8312-712aa74947ae-kube-api-access-642qv\") pod \"machine-config-operator-74547568cd-rhrt5\" (UID: \"46410587-7603-41b1-8312-712aa74947ae\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.780033 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gzn9s" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.780281 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.794537 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhvx\" (UniqueName: \"kubernetes.io/projected/5b7656ad-f68d-4941-ab5a-ff815a47e2b4-kube-api-access-7rhvx\") pod \"dns-default-lb2h8\" (UID: \"5b7656ad-f68d-4941-ab5a-ff815a47e2b4\") " pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.804858 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.809213 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cs5nv" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.822298 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5cf2\" (UniqueName: \"kubernetes.io/projected/1235bb3b-6e40-49b5-bf08-9a8f040587f9-kube-api-access-x5cf2\") pod \"kube-storage-version-migrator-operator-b67b599dd-7lb66\" (UID: \"1235bb3b-6e40-49b5-bf08-9a8f040587f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.836322 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.837557 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.337530903 +0000 UTC m=+140.902437645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.838484 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.839479 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-78h7b"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.839501 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.937914 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:15 crc kubenswrapper[4770]: E0126 18:44:15.938470 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.438396665 +0000 UTC m=+141.003303397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.939235 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjdzl"] Jan 26 18:44:15 crc kubenswrapper[4770]: W0126 18:44:15.941627 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98a8f114_013f_4c87_892a_696c15825932.slice/crio-d015ce35a9c313892dbed4fec645b5e1e445f8d0d0cd5c983a8ceaade05b9c73 WatchSource:0}: Error finding container d015ce35a9c313892dbed4fec645b5e1e445f8d0d0cd5c983a8ceaade05b9c73: Status 404 returned error can't find the container with id d015ce35a9c313892dbed4fec645b5e1e445f8d0d0cd5c983a8ceaade05b9c73 Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.962927 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.971191 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lndnr"] Jan 26 18:44:15 crc kubenswrapper[4770]: I0126 18:44:15.971634 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.049097 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.057474 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.059688 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.060047 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.560023351 +0000 UTC m=+141.124930083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.137747 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb"] Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.161178 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.161652 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.661262754 +0000 UTC m=+141.226169486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.161724 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.162315 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.662297333 +0000 UTC m=+141.227204065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.175899 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr"] Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.179499 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hpfp2"] Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.195861 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4q2sd"] Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.267622 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.267862 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.767830064 +0000 UTC m=+141.332736796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.268440 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.268858 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.768842602 +0000 UTC m=+141.333749334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.370163 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.370394 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.870379392 +0000 UTC m=+141.435286124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.370514 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.370789 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.870782223 +0000 UTC m=+141.435688955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.472269 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.472475 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.972435087 +0000 UTC m=+141.537341819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.477996 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.478602 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:16.978582607 +0000 UTC m=+141.543489339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.580363 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.581477 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:17.081451405 +0000 UTC m=+141.646358137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.582288 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.583043 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:17.083027288 +0000 UTC m=+141.647934020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.653311 4770 csr.go:261] certificate signing request csr-vvctg is approved, waiting to be issued Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.664353 4770 generic.go:334] "Generic (PLEG): container finished" podID="6e0fe412-7289-4f74-8039-b436ebac13e6" containerID="74059ddf45f38fe4827ecb2c4a8698bffecea14d4fc42c9de132173697a2b429" exitCode=0 Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.664418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" event={"ID":"6e0fe412-7289-4f74-8039-b436ebac13e6","Type":"ContainerDied","Data":"74059ddf45f38fe4827ecb2c4a8698bffecea14d4fc42c9de132173697a2b429"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.678214 4770 csr.go:257] certificate signing request csr-vvctg is issued Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.686506 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.687211 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:17.187187591 +0000 UTC m=+141.752094323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.704409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" event={"ID":"d943bce1-c743-4eea-99b2-e38c69a22211","Type":"ContainerStarted","Data":"ad8fee55caef18004e67beaf38e354023012d29631ebec86bec02c2f6e9e8975"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.713228 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2"] Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.715988 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" event={"ID":"650860ca-e588-4148-b22f-1f4e7ba16b2d","Type":"ContainerStarted","Data":"55021c1b51289e45513865ab092f454435f70cb6611450b9345492ecfce1769f"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.735993 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5qzkc"] Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.737296 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tl5vr" event={"ID":"7bd0341c-5414-42a6-988e-b05c09a2c5c8","Type":"ContainerStarted","Data":"d67b515cda5084dd39156553e39bc679655093c393dc1b754e8bf69ce59ce7a2"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.800182 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" event={"ID":"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd","Type":"ContainerStarted","Data":"da27af8ad26102dada790c977ef854633c444ee42fa6f54069465c5668b474ff"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.820979 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.823677 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:17.323659309 +0000 UTC m=+141.888566041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.836218 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" event={"ID":"2a7b733e-ad98-408b-a125-1e4f0953dafa","Type":"ContainerStarted","Data":"6bf373aca43c8e8e9b6e490bdd636bc5203cfd05629ea93276ac2ad8489bc270"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.836273 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" event={"ID":"2a7b733e-ad98-408b-a125-1e4f0953dafa","Type":"ContainerStarted","Data":"98d4fa23cfe49dc1046e35f69e6ba65ace2b7d7b0d122912b989d436968bc3a9"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.848141 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" event={"ID":"8a0fb56c-a92c-4b40-bac2-a8cd958035f0","Type":"ContainerStarted","Data":"f7de2c1887f107a8d781d56cd36e817a79b5d619472d803aedfc8548cd4b34a4"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.923173 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:16 crc kubenswrapper[4770]: E0126 18:44:16.923555 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:17.423541274 +0000 UTC m=+141.988448006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.924009 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gzn9s" event={"ID":"ca71a19b-a881-4fe9-b826-0814de7abe2b","Type":"ContainerStarted","Data":"0ab92a4dd48cf921456e1e03033595221da7ccbf89c66a00548fc9de25e8dd48"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.949565 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j"] Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.963759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" event={"ID":"cc59d647-0338-4bd2-a850-3e2ede6fa766","Type":"ContainerStarted","Data":"9bbc0809cef035fa99e3bdf98807de575e6e5bdd839d25443641878b788c5e6a"} Jan 26 18:44:16 crc kubenswrapper[4770]: I0126 18:44:16.983691 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:16.987860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" event={"ID":"69397d9a-26a6-4ce7-806b-59fca2691a73","Type":"ContainerStarted","Data":"db03a64627b4218694117313609f9cbc0e52b09f942ecf2fdf6036087f2f8a01"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:16.999324 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" event={"ID":"c2e69bd3-7fa0-4687-9588-33fd56627615","Type":"ContainerStarted","Data":"ef08db4321ee11f75f7e83fef8591157ee6689f577163a2d88d8070676d7d5d8"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.018338 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" event={"ID":"76233aff-879a-4848-8f11-b75d2fa524b5","Type":"ContainerStarted","Data":"734513c306cfc1f3b15e641edcdc964da191c2a8b45e9a8c7a61bc52ad1551c5"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.026047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.026623 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:17.526607307 +0000 UTC m=+142.091514029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.067609 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" event={"ID":"95c195e6-53d6-46c5-bc06-f084727fec7b","Type":"ContainerStarted","Data":"d99385d3c6b300fbba9105974b583e79e95acc6752f35d8656d955029178591c"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.083230 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" event={"ID":"35e8bd20-c06e-486c-b8c7-0e60df48448b","Type":"ContainerStarted","Data":"dffb2522dcb1d3ca23d84adb3e62a2ad8b2aa799e70663b77b6e20e7bb2114b0"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.084966 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" podStartSLOduration=120.084950791 podStartE2EDuration="2m0.084950791s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:17.083740129 +0000 UTC m=+141.648646871" watchObservedRunningTime="2026-01-26 18:44:17.084950791 +0000 UTC m=+141.649857513" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.085070 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pdg7h" 
podStartSLOduration=121.085065846 podStartE2EDuration="2m1.085065846s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:17.018833312 +0000 UTC m=+141.583740044" watchObservedRunningTime="2026-01-26 18:44:17.085065846 +0000 UTC m=+141.649972578" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.126671 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.127101 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:17.627086899 +0000 UTC m=+142.191993631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.174424 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" event={"ID":"d9c6078e-9f01-4aab-adff-db90e6ddedfe","Type":"ContainerStarted","Data":"1a804c3a52096be7231d5183ae53c6687f3bd86d32eee2e7d235022f0cc978dc"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.185350 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-64f7c" podStartSLOduration=121.18532668 podStartE2EDuration="2m1.18532668s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:17.122279095 +0000 UTC m=+141.687185827" watchObservedRunningTime="2026-01-26 18:44:17.18532668 +0000 UTC m=+141.750233412" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.185947 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.235897 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jnn7h" event={"ID":"6fff6531-8ffa-478f-977b-a9daf12938fe","Type":"ContainerStarted","Data":"0ce0809e081529ba3d2b3763de1264a44597b5c74d88970b612ecb4ac878bf27"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.235952 4770 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-console/downloads-7954f5f757-jnn7h" event={"ID":"6fff6531-8ffa-478f-977b-a9daf12938fe","Type":"ContainerStarted","Data":"fb7044ebbed8ffad707679374153a0f54b61b7b4655bce7fedc319b5e9337f8a"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.237072 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jnn7h" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.239384 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.239917 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:17.739902801 +0000 UTC m=+142.304809533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.271912 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnn7h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.271970 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnn7h" podUID="6fff6531-8ffa-478f-977b-a9daf12938fe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.287617 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" event={"ID":"65b0fb1c-f1ee-475d-9c5c-55f66744622f","Type":"ContainerStarted","Data":"b5ecbc0535fddc9d809079cb38bfc2806add1f6d0dd4373fc31f7a26b3ba1dcb"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.288238 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.294302 4770 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2b2nm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: 
connection refused" start-of-body= Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.294394 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.304996 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" event={"ID":"98a8f114-013f-4c87-892a-696c15825932","Type":"ContainerStarted","Data":"d015ce35a9c313892dbed4fec645b5e1e445f8d0d0cd5c983a8ceaade05b9c73"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.340867 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" event={"ID":"a59b659e-3cc4-4463-9499-dfd40eec1d47","Type":"ContainerStarted","Data":"48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.341849 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.342202 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.343581 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:17.84356693 +0000 UTC m=+142.408473662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.356586 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.370861 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.448869 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" event={"ID":"4cd4eed4-e59b-4987-936a-b880b81311a1","Type":"ContainerStarted","Data":"1d0ddf6ff08dbfe99dae8577871ef1194e94e8f7bae0713becc5ff162e35d045"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.457101 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" event={"ID":"f75e3ecf-a603-443e-b93c-6f1ca0407fec","Type":"ContainerStarted","Data":"b43f727c5ba9238fca9ad22037ea89aa15b1c150a82b807bb1ae2e369fd15081"} Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.469803 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: 
\"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.469891 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.503616 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-82pv2"] Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.505575 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.005563085 +0000 UTC m=+142.570469817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.624177 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.625568 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:18.125545847 +0000 UTC m=+142.690452579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.672225 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.682217 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 18:39:16 +0000 UTC, rotation deadline is 2026-12-13 15:56:32.091080612 +0000 UTC Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.682311 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7701h12m14.408771485s for next certificate rotation Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.717490 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24pqv"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.721825 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lb2h8"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.725537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 
18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.725915 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.225899844 +0000 UTC m=+142.790806586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.743151 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.760893 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.762878 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6"] Jan 26 18:44:17 crc kubenswrapper[4770]: W0126 18:44:17.775075 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23cf8f72_83fa_451e_afe9_08b8377f969d.slice/crio-a6854efd67e2e4f0767713302ca232e236a4bd53560f7d6dac8c1c05cd927e8d WatchSource:0}: Error finding container a6854efd67e2e4f0767713302ca232e236a4bd53560f7d6dac8c1c05cd927e8d: Status 404 returned error can't find the container with id a6854efd67e2e4f0767713302ca232e236a4bd53560f7d6dac8c1c05cd927e8d Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.795354 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" podStartSLOduration=121.795335156 podStartE2EDuration="2m1.795335156s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:17.785872084 +0000 UTC m=+142.350778816" watchObservedRunningTime="2026-01-26 18:44:17.795335156 +0000 UTC m=+142.360241888" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.811059 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66"] Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.830405 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.830819 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.330803218 +0000 UTC m=+142.895709950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.927883 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" podStartSLOduration=120.927856515 podStartE2EDuration="2m0.927856515s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:17.874671863 +0000 UTC m=+142.439578595" watchObservedRunningTime="2026-01-26 18:44:17.927856515 +0000 UTC m=+142.492763247" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.930636 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jnn7h" podStartSLOduration=121.930621 podStartE2EDuration="2m1.930621s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:17.924970434 +0000 UTC m=+142.489877166" watchObservedRunningTime="2026-01-26 18:44:17.930621 +0000 UTC m=+142.495527732" Jan 26 18:44:17 crc kubenswrapper[4770]: I0126 18:44:17.934205 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:17 crc kubenswrapper[4770]: E0126 18:44:17.934740 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.434726904 +0000 UTC m=+142.999633636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.023105 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bmthj" podStartSLOduration=122.023090171 podStartE2EDuration="2m2.023090171s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.022616937 +0000 UTC m=+142.587523659" watchObservedRunningTime="2026-01-26 18:44:18.023090171 +0000 UTC m=+142.587996903" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.024586 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" podStartSLOduration=121.024578672 podStartE2EDuration="2m1.024578672s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:17.975335638 +0000 UTC m=+142.540242370" 
watchObservedRunningTime="2026-01-26 18:44:18.024578672 +0000 UTC m=+142.589485404" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.037095 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.044457 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.544425241 +0000 UTC m=+143.109331963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.057283 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cs5nv"] Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.079864 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5"] Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.086211 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf"] Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.092575 4770 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7hgt"] Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.145115 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.145462 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.645449377 +0000 UTC m=+143.210356109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.245917 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.246924 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.746904516 +0000 UTC m=+143.311811258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.348119 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.348501 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.848488658 +0000 UTC m=+143.413395390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.450946 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.451108 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.951078347 +0000 UTC m=+143.515985079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.452952 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.453471 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:18.953454453 +0000 UTC m=+143.518361185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.520959 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" event={"ID":"29c959ef-d865-49e3-af00-eef8726e6cb2","Type":"ContainerStarted","Data":"7778982718df4739b1bbc311d6c5959855a7c7380ced0ca8379a017da6429360"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.521003 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" event={"ID":"29c959ef-d865-49e3-af00-eef8726e6cb2","Type":"ContainerStarted","Data":"4533c3de8408e7be6a45c59ac91a58b96916c01c57165fa8d0d7fd8ebd1ffc42"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.525066 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" event={"ID":"1235bb3b-6e40-49b5-bf08-9a8f040587f9","Type":"ContainerStarted","Data":"4bc29bcbdc557d52d77147473ba8ccd0484f0bb6d3c2b52c47c1bd8e3b09f8b0"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.545237 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tl5vr" event={"ID":"7bd0341c-5414-42a6-988e-b05c09a2c5c8","Type":"ContainerStarted","Data":"f395f7023d565f9a7ecc4cea835b3a0f36398a586559d9d584fccc4e6246a0a6"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.554255 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.556019 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.055995082 +0000 UTC m=+143.620901804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.563439 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" event={"ID":"76233aff-879a-4848-8f11-b75d2fa524b5","Type":"ContainerStarted","Data":"72c5fb575786c8f020f7776bbcc0432dedef6aa5314f01beb65e003aa801a6b6"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.570493 4770 generic.go:334] "Generic (PLEG): container finished" podID="35e8bd20-c06e-486c-b8c7-0e60df48448b" containerID="fc0add9656544f0862702f9479e23016705d830d797ca32df598e472b5bad133" exitCode=0 Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.571032 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" 
event={"ID":"35e8bd20-c06e-486c-b8c7-0e60df48448b","Type":"ContainerDied","Data":"fc0add9656544f0862702f9479e23016705d830d797ca32df598e472b5bad133"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.576324 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" event={"ID":"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b","Type":"ContainerStarted","Data":"75e55c9bd14c88f8eddb7a14d00ad8cd2f64a494f47bbbb8c6a0831962cb623d"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.582927 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tl5vr" podStartSLOduration=122.582897076 podStartE2EDuration="2m2.582897076s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.579869862 +0000 UTC m=+143.144776594" watchObservedRunningTime="2026-01-26 18:44:18.582897076 +0000 UTC m=+143.147803808" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.588601 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zm2q9" event={"ID":"4cd4eed4-e59b-4987-936a-b880b81311a1","Type":"ContainerStarted","Data":"86cbe07a1d544e17f20ba3ddeaa32813c5accd51a29aa21b8e6fdbcde63a9a5d"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.592654 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" event={"ID":"46410587-7603-41b1-8312-712aa74947ae","Type":"ContainerStarted","Data":"85b2f36cf7c722ed9cf617f2b92cc5564b154e141b7f4c55b6d01fcc9e2a26ca"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.595161 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" 
event={"ID":"7fd51ff1-6cc0-45ad-aa7a-44a777720efd","Type":"ContainerStarted","Data":"4f5eff3290bbf66962255f7621c46c85a0ed8ea17e3004e1e897777fcb161573"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.595256 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" event={"ID":"7fd51ff1-6cc0-45ad-aa7a-44a777720efd","Type":"ContainerStarted","Data":"6e60d33ab986ce7d0efb5d1019958e1023e93a2656c80e0db63a31b75d6de2c0"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.596524 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.601341 4770 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mskpv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.601427 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" podUID="7fd51ff1-6cc0-45ad-aa7a-44a777720efd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.605294 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" event={"ID":"f8026767-1e92-4355-9225-bb0679727208","Type":"ContainerStarted","Data":"5af3b8ef4b481dce41ead5eab6c1eaf5581978fa501857273d9af283e51d26f3"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.619480 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gzn9s" 
event={"ID":"ca71a19b-a881-4fe9-b826-0814de7abe2b","Type":"ContainerStarted","Data":"6bbeeab5cb09777a4b829d5f8aaafb8a96c7f68463375a0867109afebb836186"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.641483 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" event={"ID":"69397d9a-26a6-4ce7-806b-59fca2691a73","Type":"ContainerStarted","Data":"8499c1f438e4ef69a8c456dbe5d34b32c704468564648e2cd16090427941d652"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.659863 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g9bgr" podStartSLOduration=121.659828496 podStartE2EDuration="2m1.659828496s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.658622892 +0000 UTC m=+143.223529624" watchObservedRunningTime="2026-01-26 18:44:18.659828496 +0000 UTC m=+143.224735228" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.662651 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.662925 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.162916441 +0000 UTC m=+143.727823173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.677848 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" event={"ID":"b5e9cb7f-e595-4a56-928f-691fdb1c93f2","Type":"ContainerStarted","Data":"2200a0d46acafcb2635e9994551c96366d36b73896ebda4c5ae8b7dadd29c234"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.696386 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qzkc" event={"ID":"d6fd3922-5ed0-4e60-9db5-94eb263b410b","Type":"ContainerStarted","Data":"4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.696428 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qzkc" event={"ID":"d6fd3922-5ed0-4e60-9db5-94eb263b410b","Type":"ContainerStarted","Data":"e57583efc2aa7fba7871aec5b64d90d294fdb82da8595802a1c8868ec358b7e2"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.741851 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gzn9s" podStartSLOduration=7.741829916 podStartE2EDuration="7.741829916s" podCreationTimestamp="2026-01-26 18:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.689294791 +0000 UTC m=+143.254201513" watchObservedRunningTime="2026-01-26 
18:44:18.741829916 +0000 UTC m=+143.306736648" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.742958 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.743904 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-v5lcr" podStartSLOduration=121.743893613 podStartE2EDuration="2m1.743893613s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.74126817 +0000 UTC m=+143.306174892" watchObservedRunningTime="2026-01-26 18:44:18.743893613 +0000 UTC m=+143.308800355" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.754247 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:18 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:18 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:18 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.754299 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.761550 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" 
event={"ID":"6e0fe412-7289-4f74-8039-b436ebac13e6","Type":"ContainerStarted","Data":"b923c8a95b41c216ce92491de058d2c3c1381ef4ca741630f1d98f317bd9bd3f"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.764126 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.767167 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.267138476 +0000 UTC m=+143.832045238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.783872 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" event={"ID":"a053962a-e909-45aa-8514-8eab47372fcb","Type":"ContainerStarted","Data":"22791e6e4f7529ca60c6d00e59b4ce9a178900652c27ce8c57403dabf04d5b87"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.802787 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" 
event={"ID":"8a0fb56c-a92c-4b40-bac2-a8cd958035f0","Type":"ContainerStarted","Data":"9ee0abbf37de8a1ff5dcbd3441ed27f7c8c5265e6dd26a5721ee1d815d93bc0a"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.814769 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.815545 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" podStartSLOduration=121.815521675 podStartE2EDuration="2m1.815521675s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.779574101 +0000 UTC m=+143.344480833" watchObservedRunningTime="2026-01-26 18:44:18.815521675 +0000 UTC m=+143.380428397" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.816155 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" podStartSLOduration=121.816150823 podStartE2EDuration="2m1.816150823s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.812842862 +0000 UTC m=+143.377749594" watchObservedRunningTime="2026-01-26 18:44:18.816150823 +0000 UTC m=+143.381057555" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.850520 4770 patch_prober.go:28] interesting pod/console-operator-58897d9998-hpfp2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.850696 4770 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" podUID="8a0fb56c-a92c-4b40-bac2-a8cd958035f0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.854713 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" podStartSLOduration=121.854675899 podStartE2EDuration="2m1.854675899s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.853126296 +0000 UTC m=+143.418033028" watchObservedRunningTime="2026-01-26 18:44:18.854675899 +0000 UTC m=+143.419582631" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.863832 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" event={"ID":"65b0fb1c-f1ee-475d-9c5c-55f66744622f","Type":"ContainerStarted","Data":"734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.865450 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.868683 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" 
event={"ID":"356f3610-472f-41ac-9d8d-7c94ce6b3b1c","Type":"ContainerStarted","Data":"ac88ea2920465ffb8f344cc50952b84cb1f764649454c268fb44d0a5e6fe2853"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.880926 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:44:18 crc kubenswrapper[4770]: E0126 18:44:18.881409 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.381387429 +0000 UTC m=+143.946294161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.947323 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5qzkc" podStartSLOduration=122.947296714 podStartE2EDuration="2m2.947296714s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:18.888090894 +0000 UTC m=+143.452997626" watchObservedRunningTime="2026-01-26 18:44:18.947296714 +0000 UTC m=+143.512203446" Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.956682 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lb2h8" 
event={"ID":"5b7656ad-f68d-4941-ab5a-ff815a47e2b4","Type":"ContainerStarted","Data":"b0581190c5992fb17ae2fa699f641993b90ae008d704d0f3ae8fece9adf25a32"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.962153 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" event={"ID":"1237eddc-c2bc-417f-a757-79ec35624f0b","Type":"ContainerStarted","Data":"91b43de9b1bd7794d194b75489548544cc3a80ff26e71b02a7e68447fa49a7a6"} Jan 26 18:44:18 crc kubenswrapper[4770]: I0126 18:44:18.979569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" event={"ID":"c2e69bd3-7fa0-4687-9588-33fd56627615","Type":"ContainerStarted","Data":"e8cb495331565095eb0e8cc57f1081d205bbf6a805b6c482229b15920a99b439"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.043217 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.044244 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.544229757 +0000 UTC m=+144.109136479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.059113 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" event={"ID":"23cf8f72-83fa-451e-afe9-08b8377f969d","Type":"ContainerStarted","Data":"a6854efd67e2e4f0767713302ca232e236a4bd53560f7d6dac8c1c05cd927e8d"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.069675 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" event={"ID":"c99112e4-bf15-412c-89dd-a68b4bd43dd5","Type":"ContainerStarted","Data":"053c2fdfbeef62642b578d4e70b6f4f9d45ab589ce2dc90fbb864da580ff79ce"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.069745 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" event={"ID":"c99112e4-bf15-412c-89dd-a68b4bd43dd5","Type":"ContainerStarted","Data":"a34bd003dbfe27a748b6ef6b937a0232b82dc5c4ea16658e1a6171dd82e6443a"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.109610 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" podStartSLOduration=123.109563615 podStartE2EDuration="2m3.109563615s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.021692513 +0000 UTC m=+143.586599275" 
watchObservedRunningTime="2026-01-26 18:44:19.109563615 +0000 UTC m=+143.674470347" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.112749 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" event={"ID":"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15","Type":"ContainerStarted","Data":"f3ed6ccb3cbf66e55bac805b733244d1deba649d2d78e68b816f7c4fd39249e2"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.113524 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.135598 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" podStartSLOduration=123.135540484 podStartE2EDuration="2m3.135540484s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.098380386 +0000 UTC m=+143.663287118" watchObservedRunningTime="2026-01-26 18:44:19.135540484 +0000 UTC m=+143.700447226" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.145761 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.146447 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:19.646414255 +0000 UTC m=+144.211320987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.147040 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zszln container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.147089 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" podUID="cf831fd5-2de3-4d8e-8c93-2dadcdb72e15" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.152140 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" event={"ID":"4f522286-ca46-4767-8813-5d5079d1d108","Type":"ContainerStarted","Data":"a96ac72754c090e9ec4985329a19a2171726f59da8047bcdb684f4d8962baeb0"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.163811 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" 
event={"ID":"c9ca31d9-c0f7-4bb1-8309-5481cefb40bd","Type":"ContainerStarted","Data":"c51c2f01490794556e0ba5fce51d390231d33df00b55b605ea15c94676c7c02d"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.166392 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" event={"ID":"3c946c9b-8aca-4750-a9df-9bde5608a7cf","Type":"ContainerStarted","Data":"346111f321d85c7aafba7a0ee96053d348f2d77fa0c89634a4d0cf06e8290aeb"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.167277 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.179180 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" event={"ID":"ecc3859c-a7f3-4828-b58a-01b4570f0f7a","Type":"ContainerStarted","Data":"2ca428aedf12be36b245200b1e45885d8a30a6a10f84e24f6aec85de9c1cbca0"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.179277 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" event={"ID":"ecc3859c-a7f3-4828-b58a-01b4570f0f7a","Type":"ContainerStarted","Data":"bc664cdb79c2a88581cc1ce38d410b057b091a072a79ea91fdb2d308e88d4a09"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.189125 4770 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8sn2b container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.189196 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" podUID="3c946c9b-8aca-4750-a9df-9bde5608a7cf" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.210321 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4d6cp" podStartSLOduration=123.210301813 podStartE2EDuration="2m3.210301813s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.210112838 +0000 UTC m=+143.775019580" watchObservedRunningTime="2026-01-26 18:44:19.210301813 +0000 UTC m=+143.775208545" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.211174 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" podStartSLOduration=122.211166767 podStartE2EDuration="2m2.211166767s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.139127244 +0000 UTC m=+143.704033976" watchObservedRunningTime="2026-01-26 18:44:19.211166767 +0000 UTC m=+143.776073499" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.215009 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cs5nv" event={"ID":"e81c2eec-e611-4338-abe6-50e0551b3e44","Type":"ContainerStarted","Data":"72ece56ea741b0e66d5b139716107266634abcea65b3f4b37f3b9a81673ae37a"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.246605 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.248473 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.74844373 +0000 UTC m=+144.313350492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.267681 4770 generic.go:334] "Generic (PLEG): container finished" podID="650860ca-e588-4148-b22f-1f4e7ba16b2d" containerID="fdd78593679cb19d520756d98958d88d8d77f2a1fcf9c856ff5d76f0e644f11e" exitCode=0 Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.267798 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" event={"ID":"650860ca-e588-4148-b22f-1f4e7ba16b2d","Type":"ContainerDied","Data":"fdd78593679cb19d520756d98958d88d8d77f2a1fcf9c856ff5d76f0e644f11e"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.269196 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-82pv2" podStartSLOduration=122.269185243 podStartE2EDuration="2m2.269185243s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.26725239 +0000 UTC 
m=+143.832159132" watchObservedRunningTime="2026-01-26 18:44:19.269185243 +0000 UTC m=+143.834091975" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.285036 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" event={"ID":"98a8f114-013f-4c87-892a-696c15825932","Type":"ContainerStarted","Data":"b02376696b5ef9bf1295214f658e3442e1db2708940e1d9ff6ae92f3cddd88d9"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.315039 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" event={"ID":"95c195e6-53d6-46c5-bc06-f084727fec7b","Type":"ContainerStarted","Data":"5380eb3d6563ecd90e127ae00cadd54b2b0bcb849b6c0a509994595858d6a5a4"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.336672 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" podStartSLOduration=122.336658081 podStartE2EDuration="2m2.336658081s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.334632435 +0000 UTC m=+143.899539167" watchObservedRunningTime="2026-01-26 18:44:19.336658081 +0000 UTC m=+143.901564813" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.342658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" event={"ID":"cc59d647-0338-4bd2-a850-3e2ede6fa766","Type":"ContainerStarted","Data":"bbc8c5535a7b83cc0d10e8d4172014d194bb77f99e226712a777e68ceb805ea7"} Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.344930 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnn7h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.345061 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnn7h" podUID="6fff6531-8ffa-478f-977b-a9daf12938fe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.350517 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.351118 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.851103841 +0000 UTC m=+144.416010573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.377539 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" podStartSLOduration=122.377525963 podStartE2EDuration="2m2.377525963s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.376660788 +0000 UTC m=+143.941567520" watchObservedRunningTime="2026-01-26 18:44:19.377525963 +0000 UTC m=+143.942432695" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.455207 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.456728 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.956698124 +0000 UTC m=+144.521604856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.456783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.462517 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:19.962499835 +0000 UTC m=+144.527406617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.535228 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m22qb" podStartSLOduration=123.535211477 podStartE2EDuration="2m3.535211477s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.527635468 +0000 UTC m=+144.092542200" watchObservedRunningTime="2026-01-26 18:44:19.535211477 +0000 UTC m=+144.100118209" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.557678 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.558246 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.058217344 +0000 UTC m=+144.623124066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.659718 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.660623 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.160603838 +0000 UTC m=+144.725510570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.755993 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:19 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:19 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:19 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.756057 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.762757 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.763054 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:20.263041074 +0000 UTC m=+144.827947806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.864446 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.864904 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.364890383 +0000 UTC m=+144.929797115 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:19 crc kubenswrapper[4770]: I0126 18:44:19.965871 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:19 crc kubenswrapper[4770]: E0126 18:44:19.966271 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.466252619 +0000 UTC m=+145.031159351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.078393 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.104840 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.604813714 +0000 UTC m=+145.169720446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.184474 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.185295 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.685274642 +0000 UTC m=+145.250181374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.286329 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.286734 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.786720429 +0000 UTC m=+145.351627161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.373262 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" event={"ID":"f75e3ecf-a603-443e-b93c-6f1ca0407fec","Type":"ContainerStarted","Data":"099fad3eac8078129d4056b89be17912563f8315daf764088d149558dbda7de7"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.380524 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" event={"ID":"f8026767-1e92-4355-9225-bb0679727208","Type":"ContainerStarted","Data":"abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.381933 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.387598 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.387919 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:20.887905091 +0000 UTC m=+145.452811823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.394863 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-24pqv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.394938 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.418031 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qc9kl" podStartSLOduration=124.418013414 podStartE2EDuration="2m4.418013414s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:19.612204798 +0000 UTC m=+144.177111530" watchObservedRunningTime="2026-01-26 18:44:20.418013414 +0000 UTC m=+144.982920146" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.421385 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" event={"ID":"4f522286-ca46-4767-8813-5d5079d1d108","Type":"ContainerStarted","Data":"21ccee250a950ad37e11338e087fd80099a0b3853132d852662373fe27eee700"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.421433 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" event={"ID":"4f522286-ca46-4767-8813-5d5079d1d108","Type":"ContainerStarted","Data":"95cb7612ed3acd6fabdaeea8f165747ea02350d26e7a223a4aeb06d9343284ef"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.421483 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.428587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" event={"ID":"3c946c9b-8aca-4750-a9df-9bde5608a7cf","Type":"ContainerStarted","Data":"6df5a4f011225e29bd553daef516f5bb89d675e0a23b97c796f17c3af90ddaa2"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.442813 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8sn2b" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.446429 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" podStartSLOduration=123.4464081 podStartE2EDuration="2m3.4464081s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.423409474 +0000 UTC m=+144.988316206" watchObservedRunningTime="2026-01-26 18:44:20.4464081 +0000 UTC m=+145.011314832" 
Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.447442 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" podStartSLOduration=123.447433358 podStartE2EDuration="2m3.447433358s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.445406883 +0000 UTC m=+145.010313615" watchObservedRunningTime="2026-01-26 18:44:20.447433358 +0000 UTC m=+145.012340090" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.452469 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5v997" event={"ID":"b5e9cb7f-e595-4a56-928f-691fdb1c93f2","Type":"ContainerStarted","Data":"c68c5c630fedb0791bc2b4f2c29a5ac18e04dba184e3aad9489bf4ad3f90089a"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.464319 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lb2h8" event={"ID":"5b7656ad-f68d-4941-ab5a-ff815a47e2b4","Type":"ContainerStarted","Data":"bebb38e36d9c62bb8aee6c61234062abec2d71e2358c9a6b7b04b0fa17e2f5f7"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.466470 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" event={"ID":"29c959ef-d865-49e3-af00-eef8726e6cb2","Type":"ContainerStarted","Data":"1db404238b497adaa0939611982e0ef88067248e558ca7b896c6ee81890a86ba"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.497079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: 
\"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.498592 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:20.998576344 +0000 UTC m=+145.563483136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.503775 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cs5nv" event={"ID":"e81c2eec-e611-4338-abe6-50e0551b3e44","Type":"ContainerStarted","Data":"eea81a077d679984b5394a3e31d1d42f623624cc1b554deae923e1062aaaf4c9"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.505904 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" event={"ID":"1235bb3b-6e40-49b5-bf08-9a8f040587f9","Type":"ContainerStarted","Data":"bbfb0a5dbafd7e7795fe5c842e2160ec8bfcc5dc2a1b48ed51523e8122d1fe14"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.511451 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wdd8j" podStartSLOduration=123.51143356 podStartE2EDuration="2m3.51143356s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.508603562 +0000 UTC m=+145.073510284" watchObservedRunningTime="2026-01-26 18:44:20.51143356 +0000 UTC m=+145.076340292" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.535195 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" event={"ID":"650860ca-e588-4148-b22f-1f4e7ba16b2d","Type":"ContainerStarted","Data":"d0c523a40002a6b6734e4d3ed480951cc9d0479ad76ad49d4dfbce79744b2530"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.538517 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" event={"ID":"35e8bd20-c06e-486c-b8c7-0e60df48448b","Type":"ContainerStarted","Data":"a0d8c896c1e88342d7c6c0e6f5b99bed82b5130d5e55cd4acefa8935bf9c769d"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.539231 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.545815 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7lb66" podStartSLOduration=123.545796731 podStartE2EDuration="2m3.545796731s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.545122463 +0000 UTC m=+145.110029195" watchObservedRunningTime="2026-01-26 18:44:20.545796731 +0000 UTC m=+145.110703463" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.557134 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" 
event={"ID":"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b","Type":"ContainerStarted","Data":"7c600f4b4d742b21e6c08e9c19e58a4ed3e1c0eea707b9f69a4674ea92c05797"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.557207 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" event={"ID":"8f1ef4aa-d364-4658-8e8a-cd473fcaf81b","Type":"ContainerStarted","Data":"6ee81298c3f92db6b45fd9b0dedc4cc7d786c1d83eb37ba3b7babc690e0050ee"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.572728 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cs5nv" podStartSLOduration=8.572711946 podStartE2EDuration="8.572711946s" podCreationTimestamp="2026-01-26 18:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.572473459 +0000 UTC m=+145.137380191" watchObservedRunningTime="2026-01-26 18:44:20.572711946 +0000 UTC m=+145.137618668" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.589886 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" event={"ID":"356f3610-472f-41ac-9d8d-7c94ce6b3b1c","Type":"ContainerStarted","Data":"18a5f57278669abb168236448763198020796af1df5f67c322446da1190f27fe"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.589929 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" event={"ID":"356f3610-472f-41ac-9d8d-7c94ce6b3b1c","Type":"ContainerStarted","Data":"f94b590cba14fd20e0372e4616b454981e37cce475077d06549ed93ce735f43e"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.601249 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.602224 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.102209633 +0000 UTC m=+145.667116365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.604534 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xx2j2" podStartSLOduration=124.604519046 podStartE2EDuration="2m4.604519046s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.603853598 +0000 UTC m=+145.168760330" watchObservedRunningTime="2026-01-26 18:44:20.604519046 +0000 UTC m=+145.169425778" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.606405 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" event={"ID":"c2e69bd3-7fa0-4687-9588-33fd56627615","Type":"ContainerStarted","Data":"9f49984a4239f930a1c58cd1790714c2bacc5a1f84bac7fbbb2d87a3851d4398"} Jan 26 
18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.619108 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" event={"ID":"23cf8f72-83fa-451e-afe9-08b8377f969d","Type":"ContainerStarted","Data":"e9417738be797e0350eea9acbbf7d9c1b79ab5ba6ce55d523ed220c527b388e1"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.648547 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" podStartSLOduration=124.648530725 podStartE2EDuration="2m4.648530725s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.64763325 +0000 UTC m=+145.212539982" watchObservedRunningTime="2026-01-26 18:44:20.648530725 +0000 UTC m=+145.213437447" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.649040 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" event={"ID":"1237eddc-c2bc-417f-a757-79ec35624f0b","Type":"ContainerStarted","Data":"c404605156d02a424d51a70729f19bda86b905987a206109d80da17457741a9b"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.662073 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" event={"ID":"cf831fd5-2de3-4d8e-8c93-2dadcdb72e15","Type":"ContainerStarted","Data":"7df365de5c7da589fb4c64d01312b97276ef8bc621672ed1677486b773ba7dfa"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.675645 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zszln" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.688251 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" event={"ID":"a053962a-e909-45aa-8514-8eab47372fcb","Type":"ContainerStarted","Data":"26f450b5aba5f3e789f73709462b2833546252bd144b15501a2ec56b36633ca9"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.696885 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dz75h" podStartSLOduration=123.696871933 podStartE2EDuration="2m3.696871933s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.694981231 +0000 UTC m=+145.259887963" watchObservedRunningTime="2026-01-26 18:44:20.696871933 +0000 UTC m=+145.261778665" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.696971 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xdmd6" podStartSLOduration=124.696967175 podStartE2EDuration="2m4.696967175s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.674925305 +0000 UTC m=+145.239832037" watchObservedRunningTime="2026-01-26 18:44:20.696967175 +0000 UTC m=+145.261873907" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.703176 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.708281 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nxckq" event={"ID":"98a8f114-013f-4c87-892a-696c15825932","Type":"ContainerStarted","Data":"208cf5c4cf95809484c9ee434fa7456d15458f28cf04947f01f10228ec28d64a"} Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.708822 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.208804334 +0000 UTC m=+145.773711066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.739818 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:20 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:20 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:20 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.739861 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.744931 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" event={"ID":"46410587-7603-41b1-8312-712aa74947ae","Type":"ContainerStarted","Data":"0d9e3a86b9df74a43178378a772e7097410e4351e8e895ec75f920ec78d3c0d1"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.744968 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" event={"ID":"46410587-7603-41b1-8312-712aa74947ae","Type":"ContainerStarted","Data":"58a1085e0a4a45df8454bb3b380f36eb9b554326a672aac3c967b0e8844e46ad"} Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.758365 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mskpv" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.759402 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjdzl" podStartSLOduration=123.759387634 podStartE2EDuration="2m3.759387634s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.729471545 +0000 UTC m=+145.294378277" watchObservedRunningTime="2026-01-26 18:44:20.759387634 +0000 UTC m=+145.324294366" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.759983 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n5rlf" podStartSLOduration=123.75997935 podStartE2EDuration="2m3.75997935s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.758644703 +0000 UTC m=+145.323551435" watchObservedRunningTime="2026-01-26 18:44:20.75997935 +0000 UTC m=+145.324886082" Jan 26 
18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.790758 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hpfp2" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.804521 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.805211 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.305187032 +0000 UTC m=+145.870093764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.806818 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhrt5" podStartSLOduration=123.806807156 podStartE2EDuration="2m3.806807156s" podCreationTimestamp="2026-01-26 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:20.805962663 +0000 UTC m=+145.370869395" watchObservedRunningTime="2026-01-26 
18:44:20.806807156 +0000 UTC m=+145.371713888" Jan 26 18:44:20 crc kubenswrapper[4770]: I0126 18:44:20.909507 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:20 crc kubenswrapper[4770]: E0126 18:44:20.915387 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.415372991 +0000 UTC m=+145.980279723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.018830 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.019123 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:21.519107052 +0000 UTC m=+146.084013784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.120580 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.121074 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.621056595 +0000 UTC m=+146.185963327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.222154 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.222596 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.722580765 +0000 UTC m=+146.287487497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.325638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.326096 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.82608063 +0000 UTC m=+146.390987362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.326260 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tp628"] Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.327292 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.332007 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.341734 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tp628"] Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.427078 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.427312 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-utilities\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 
crc kubenswrapper[4770]: I0126 18:44:21.427390 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85g72\" (UniqueName: \"kubernetes.io/projected/46328d44-acf0-4a1f-86c9-c2c08d21640e-kube-api-access-85g72\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.427430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-catalog-content\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.427587 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:21.927569299 +0000 UTC m=+146.492476041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.499426 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zbq6m"] Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.500759 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.505611 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.515134 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbq6m"] Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.528757 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-utilities\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.528808 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.528861 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85g72\" (UniqueName: \"kubernetes.io/projected/46328d44-acf0-4a1f-86c9-c2c08d21640e-kube-api-access-85g72\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.528890 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-catalog-content\") pod \"certified-operators-tp628\" (UID: 
\"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.529383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-catalog-content\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.529383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-utilities\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.529632 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.029621184 +0000 UTC m=+146.594527916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.556600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85g72\" (UniqueName: \"kubernetes.io/projected/46328d44-acf0-4a1f-86c9-c2c08d21640e-kube-api-access-85g72\") pod \"certified-operators-tp628\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") " pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.629462 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.629714 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.129652323 +0000 UTC m=+146.694559045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.629764 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.629886 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-utilities\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.629920 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-catalog-content\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.630098 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:22.130085296 +0000 UTC m=+146.694992028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.630141 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-929kx\" (UniqueName: \"kubernetes.io/projected/05fe33d7-6976-43c6-aa31-31751ac4f332-kube-api-access-929kx\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.666277 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.714522 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qmc66"] Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.715421 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.731308 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.731477 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.231455231 +0000 UTC m=+146.796361963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.731575 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-utilities\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.731622 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-catalog-content\") pod \"community-operators-zbq6m\" 
(UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.731779 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-929kx\" (UniqueName: \"kubernetes.io/projected/05fe33d7-6976-43c6-aa31-31751ac4f332-kube-api-access-929kx\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.732068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-utilities\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.732199 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-catalog-content\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.732490 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.23247683 +0000 UTC m=+146.797383572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.732644 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.735854 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qmc66"] Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.736432 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:21 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:21 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:21 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.736484 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.757475 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-929kx\" (UniqueName: \"kubernetes.io/projected/05fe33d7-6976-43c6-aa31-31751ac4f332-kube-api-access-929kx\") pod \"community-operators-zbq6m\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") " pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.791437 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" event={"ID":"650860ca-e588-4148-b22f-1f4e7ba16b2d","Type":"ContainerStarted","Data":"bc442877c2e455a2de5ddea7dec1072afc96ecbd9645fd9514dad3764bc445fd"} Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.799737 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" event={"ID":"f75e3ecf-a603-443e-b93c-6f1ca0407fec","Type":"ContainerStarted","Data":"db81c211d25f3c2bf294da445a8839e66f1bde2d1a59ff9b910dc041c05a26ac"} Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.806287 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" event={"ID":"1237eddc-c2bc-417f-a757-79ec35624f0b","Type":"ContainerStarted","Data":"9413d33778ed6fcb0e729b2051562dbfcb55cbcc383a00699ff0e5e75075ad1e"} Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.812242 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.827102 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" podStartSLOduration=125.827073228 podStartE2EDuration="2m5.827073228s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:21.825838653 +0000 UTC m=+146.390745405" watchObservedRunningTime="2026-01-26 18:44:21.827073228 +0000 UTC m=+146.391979950" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.828420 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lb2h8" event={"ID":"5b7656ad-f68d-4941-ab5a-ff815a47e2b4","Type":"ContainerStarted","Data":"d254b4597805c0c0a0995f63fecd6af71512ae82a9a657c83549da088aa56663"} Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.828469 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.832409 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-24pqv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.832472 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.834324 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.834633 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvczs\" (UniqueName: \"kubernetes.io/projected/0d23ddd5-e513-4a80-89ab-28f99522aaa8-kube-api-access-cvczs\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.834665 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-catalog-content\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.834824 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-utilities\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.836473 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.336456638 +0000 UTC m=+146.901363370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.877260 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-l7hgt" podStartSLOduration=125.877241017 podStartE2EDuration="2m5.877241017s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:21.877120463 +0000 UTC m=+146.442027195" watchObservedRunningTime="2026-01-26 18:44:21.877241017 +0000 UTC m=+146.442147749" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.917119 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2hflw"] Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.917277 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lb2h8" podStartSLOduration=9.917259985 podStartE2EDuration="9.917259985s" podCreationTimestamp="2026-01-26 18:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:21.912282466 +0000 UTC m=+146.477189198" watchObservedRunningTime="2026-01-26 18:44:21.917259985 +0000 UTC m=+146.482166717" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.918485 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.936197 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvczs\" (UniqueName: \"kubernetes.io/projected/0d23ddd5-e513-4a80-89ab-28f99522aaa8-kube-api-access-cvczs\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.936262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-catalog-content\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.936349 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.936475 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-utilities\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.937669 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hflw"] Jan 26 18:44:21 crc kubenswrapper[4770]: E0126 18:44:21.940472 4770 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.440452796 +0000 UTC m=+147.005359618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.944938 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-catalog-content\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.946337 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-utilities\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:21 crc kubenswrapper[4770]: I0126 18:44:21.994164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvczs\" (UniqueName: \"kubernetes.io/projected/0d23ddd5-e513-4a80-89ab-28f99522aaa8-kube-api-access-cvczs\") pod \"certified-operators-qmc66\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.042491 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.043245 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzngv\" (UniqueName: \"kubernetes.io/projected/ec165c57-f43f-4dbe-9768-bbfbab10826c-kube-api-access-bzngv\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.043308 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-utilities\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.043332 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-catalog-content\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.043451 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.543433977 +0000 UTC m=+147.108340709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.055124 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.126858 4770 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.144851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.144891 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzngv\" (UniqueName: \"kubernetes.io/projected/ec165c57-f43f-4dbe-9768-bbfbab10826c-kube-api-access-bzngv\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.144918 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-utilities\") pod 
\"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.144933 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-catalog-content\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.145375 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-catalog-content\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.145618 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.645607835 +0000 UTC m=+147.210514567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.146419 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-utilities\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.182486 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzngv\" (UniqueName: \"kubernetes.io/projected/ec165c57-f43f-4dbe-9768-bbfbab10826c-kube-api-access-bzngv\") pod \"community-operators-2hflw\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.246278 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.246686 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:22.746666583 +0000 UTC m=+147.311573325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.273458 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.348412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.348711 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.848684317 +0000 UTC m=+147.413591039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.436326 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tp628"] Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.449046 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.449217 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.949185149 +0000 UTC m=+147.514091871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.449331 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.449803 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:22.949778655 +0000 UTC m=+147.514685387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.528735 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbq6m"] Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.556880 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.557640 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.05761156 +0000 UTC m=+147.622518292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.558659 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.559132 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.059121081 +0000 UTC m=+147.624027813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.604811 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hflw"] Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.659671 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.660134 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.160119847 +0000 UTC m=+147.725026579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.668746 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qmc66"] Jan 26 18:44:22 crc kubenswrapper[4770]: W0126 18:44:22.710056 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d23ddd5_e513_4a80_89ab_28f99522aaa8.slice/crio-1e8a1b52292c97fdd24f0bccaf8cb730a396d7ee2551ef78b40a27019cb08a28 WatchSource:0}: Error finding container 1e8a1b52292c97fdd24f0bccaf8cb730a396d7ee2551ef78b40a27019cb08a28: Status 404 returned error can't find the container with id 1e8a1b52292c97fdd24f0bccaf8cb730a396d7ee2551ef78b40a27019cb08a28 Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.735332 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:22 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:22 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:22 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.735395 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 
18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.761075 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.761409 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.261396221 +0000 UTC m=+147.826302943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.846018 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hflw" event={"ID":"ec165c57-f43f-4dbe-9768-bbfbab10826c","Type":"ContainerStarted","Data":"cd905c4689077c61fa0e326635b24a2d6fe28583530927dffdfbd681ebcd43f8"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.846069 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hflw" event={"ID":"ec165c57-f43f-4dbe-9768-bbfbab10826c","Type":"ContainerStarted","Data":"42827e7d731bec4daa40f2480531ca8dc88825d26f509aad8548d5805933cf66"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.848549 4770 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" event={"ID":"f75e3ecf-a603-443e-b93c-6f1ca0407fec","Type":"ContainerStarted","Data":"170362a65b8c26c19691731a793f4367eae0cff597452c98df10eb56d7de56fa"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.848597 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" event={"ID":"f75e3ecf-a603-443e-b93c-6f1ca0407fec","Type":"ContainerStarted","Data":"d80cffaec1b5bbb077975ab850e430e5fc91122dfd48c6ae490037ae134aaa09"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.853143 4770 generic.go:334] "Generic (PLEG): container finished" podID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerID="4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad" exitCode=0 Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.853221 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbq6m" event={"ID":"05fe33d7-6976-43c6-aa31-31751ac4f332","Type":"ContainerDied","Data":"4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.853244 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbq6m" event={"ID":"05fe33d7-6976-43c6-aa31-31751ac4f332","Type":"ContainerStarted","Data":"5ca1867aa2293232895859a0c1021af6c42355360ec6f4a1768d7c540f11ace5"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.856095 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.861949 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.862123 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.362098549 +0000 UTC m=+147.927005281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.862338 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.862674 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 18:44:23.362659354 +0000 UTC m=+147.927566086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pp4k8" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.868495 4770 generic.go:334] "Generic (PLEG): container finished" podID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerID="ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a" exitCode=0 Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.868570 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tp628" event={"ID":"46328d44-acf0-4a1f-86c9-c2c08d21640e","Type":"ContainerDied","Data":"ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.868613 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tp628" event={"ID":"46328d44-acf0-4a1f-86c9-c2c08d21640e","Type":"ContainerStarted","Data":"ab985e1ac0b282f86e5989a4d5f4e9bcf67c4d2341b54eba908b53a8b58d4470"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.870947 4770 generic.go:334] "Generic (PLEG): container finished" podID="c99112e4-bf15-412c-89dd-a68b4bd43dd5" containerID="053c2fdfbeef62642b578d4e70b6f4f9d45ab589ce2dc90fbb864da580ff79ce" exitCode=0 Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.871030 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" event={"ID":"c99112e4-bf15-412c-89dd-a68b4bd43dd5","Type":"ContainerDied","Data":"053c2fdfbeef62642b578d4e70b6f4f9d45ab589ce2dc90fbb864da580ff79ce"} Jan 26 18:44:22 crc kubenswrapper[4770]: 
I0126 18:44:22.873071 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmc66" event={"ID":"0d23ddd5-e513-4a80-89ab-28f99522aaa8","Type":"ContainerStarted","Data":"1e8a1b52292c97fdd24f0bccaf8cb730a396d7ee2551ef78b40a27019cb08a28"} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.873634 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4q2sd" podStartSLOduration=11.873621087 podStartE2EDuration="11.873621087s" podCreationTimestamp="2026-01-26 18:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:22.872136006 +0000 UTC m=+147.437042768" watchObservedRunningTime="2026-01-26 18:44:22.873621087 +0000 UTC m=+147.438527819" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.884585 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.898246 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-78h7b" Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.963336 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:22 crc kubenswrapper[4770]: E0126 18:44:22.964805 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 18:44:23.464774811 +0000 UTC m=+148.029681543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.979719 4770 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T18:44:22.126886118Z","Handler":null,"Name":""} Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.990944 4770 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 18:44:22 crc kubenswrapper[4770]: I0126 18:44:22.991339 4770 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.066198 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.068782 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.068815 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.093334 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pp4k8\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.166919 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.173201 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.301236 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dskv9"] Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.302319 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: W0126 18:44:23.304949 4770 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: secrets "redhat-marketplace-dockercfg-x2ctb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 26 18:44:23 crc kubenswrapper[4770]: E0126 18:44:23.305012 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-marketplace-dockercfg-x2ctb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.316014 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dskv9"] Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.369784 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-utilities\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.369846 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlr85\" (UniqueName: \"kubernetes.io/projected/5ef61da5-d46a-4647-9372-2ef906bc7622-kube-api-access-hlr85\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.370065 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-catalog-content\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.395033 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.471123 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-utilities\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.471171 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlr85\" (UniqueName: \"kubernetes.io/projected/5ef61da5-d46a-4647-9372-2ef906bc7622-kube-api-access-hlr85\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.471261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-catalog-content\") 
pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.471783 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-catalog-content\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.471850 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-utilities\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.493019 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlr85\" (UniqueName: \"kubernetes.io/projected/5ef61da5-d46a-4647-9372-2ef906bc7622-kube-api-access-hlr85\") pod \"redhat-marketplace-dskv9\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") " pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.558692 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.558753 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.571575 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.605126 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-pp4k8"] Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.682772 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.682885 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.682989 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.683043 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.685023 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.687764 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.688570 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.691615 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.696606 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.711737 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hfx"] Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.713063 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.714741 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.718449 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.719011 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.719740 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.721595 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.725131 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.741002 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.745362 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hfx"] Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.746774 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:23 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:23 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:23 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.752458 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.784364 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.787142 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/befd620c-6219-4da8-95b0-4514b81117b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.787338 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-catalog-content\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.787524 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/befd620c-6219-4da8-95b0-4514b81117b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.787681 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-utilities\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.787814 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsntc\" (UniqueName: \"kubernetes.io/projected/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-kube-api-access-tsntc\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " 
pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.888567 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmc66" event={"ID":"0d23ddd5-e513-4a80-89ab-28f99522aaa8","Type":"ContainerDied","Data":"7faf23e8e404a390e4bcdb1fa872780101f871a8b9488baf7f9c91033aa979be"} Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.888889 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/befd620c-6219-4da8-95b0-4514b81117b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.888840 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/befd620c-6219-4da8-95b0-4514b81117b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.888930 4770 generic.go:334] "Generic (PLEG): container finished" podID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerID="7faf23e8e404a390e4bcdb1fa872780101f871a8b9488baf7f9c91033aa979be" exitCode=0 Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.889006 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-utilities\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.889117 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsntc\" (UniqueName: 
\"kubernetes.io/projected/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-kube-api-access-tsntc\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.889188 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/befd620c-6219-4da8-95b0-4514b81117b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.889210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-catalog-content\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.889567 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-utilities\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.889603 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-catalog-content\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.895243 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec165c57-f43f-4dbe-9768-bbfbab10826c" 
containerID="cd905c4689077c61fa0e326635b24a2d6fe28583530927dffdfbd681ebcd43f8" exitCode=0 Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.895390 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hflw" event={"ID":"ec165c57-f43f-4dbe-9768-bbfbab10826c","Type":"ContainerDied","Data":"cd905c4689077c61fa0e326635b24a2d6fe28583530927dffdfbd681ebcd43f8"} Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.904560 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" event={"ID":"7acc36bb-6e6d-40cf-957f-82e0b5c50b59","Type":"ContainerStarted","Data":"571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10"} Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.904614 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" event={"ID":"7acc36bb-6e6d-40cf-957f-82e0b5c50b59","Type":"ContainerStarted","Data":"62eab0a33d7018edc5aa99bfe80d3bbae22c6ca9586e727930037158f6f40e50"} Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.904638 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.913463 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsntc\" (UniqueName: \"kubernetes.io/projected/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-kube-api-access-tsntc\") pod \"redhat-marketplace-f9hfx\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.914297 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rj6f7" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.922244 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/befd620c-6219-4da8-95b0-4514b81117b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:23 crc kubenswrapper[4770]: I0126 18:44:23.956139 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" podStartSLOduration=127.956121292 podStartE2EDuration="2m7.956121292s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:23.937692322 +0000 UTC m=+148.502599054" watchObservedRunningTime="2026-01-26 18:44:23.956121292 +0000 UTC m=+148.521028024" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.057765 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.462773 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:24 crc kubenswrapper[4770]: W0126 18:44:24.466030 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-feb5570d56d1e79ecee00885ef406d398bfec8ee4a67f6c86ef5685e73531f5e WatchSource:0}: Error finding container feb5570d56d1e79ecee00885ef406d398bfec8ee4a67f6c86ef5685e73531f5e: Status 404 returned error can't find the container with id feb5570d56d1e79ecee00885ef406d398bfec8ee4a67f6c86ef5685e73531f5e Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.475519 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.510751 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2l6x4"] Jan 26 18:44:24 crc kubenswrapper[4770]: E0126 18:44:24.511608 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99112e4-bf15-412c-89dd-a68b4bd43dd5" containerName="collect-profiles" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.511622 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99112e4-bf15-412c-89dd-a68b4bd43dd5" containerName="collect-profiles" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.511753 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99112e4-bf15-412c-89dd-a68b4bd43dd5" containerName="collect-profiles" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.513292 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.514444 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2l6x4"] Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.520798 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.575030 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 18:44:24 crc kubenswrapper[4770]: W0126 18:44:24.578838 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-eac24e24b792c9e91eb43ded0697b89cb84fd347d7c65b252357217059bd28a3 WatchSource:0}: Error finding container eac24e24b792c9e91eb43ded0697b89cb84fd347d7c65b252357217059bd28a3: Status 404 returned error can't find the container with id eac24e24b792c9e91eb43ded0697b89cb84fd347d7c65b252357217059bd28a3 Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.578935 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.583131 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.617950 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99112e4-bf15-412c-89dd-a68b4bd43dd5-config-volume\") pod \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.618002 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99112e4-bf15-412c-89dd-a68b4bd43dd5-secret-volume\") pod \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.618108 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8kqn\" (UniqueName: \"kubernetes.io/projected/c99112e4-bf15-412c-89dd-a68b4bd43dd5-kube-api-access-v8kqn\") pod \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\" (UID: \"c99112e4-bf15-412c-89dd-a68b4bd43dd5\") " Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.618494 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-utilities\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.618550 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-catalog-content\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: 
I0126 18:44:24.618582 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2lx\" (UniqueName: \"kubernetes.io/projected/df32c63c-3381-4eff-8e21-969aaac5d74d-kube-api-access-hb2lx\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.619429 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c99112e4-bf15-412c-89dd-a68b4bd43dd5-config-volume" (OuterVolumeSpecName: "config-volume") pod "c99112e4-bf15-412c-89dd-a68b4bd43dd5" (UID: "c99112e4-bf15-412c-89dd-a68b4bd43dd5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.624265 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99112e4-bf15-412c-89dd-a68b4bd43dd5-kube-api-access-v8kqn" (OuterVolumeSpecName: "kube-api-access-v8kqn") pod "c99112e4-bf15-412c-89dd-a68b4bd43dd5" (UID: "c99112e4-bf15-412c-89dd-a68b4bd43dd5"). InnerVolumeSpecName "kube-api-access-v8kqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.624638 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99112e4-bf15-412c-89dd-a68b4bd43dd5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c99112e4-bf15-412c-89dd-a68b4bd43dd5" (UID: "c99112e4-bf15-412c-89dd-a68b4bd43dd5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.719630 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-utilities\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.719696 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-catalog-content\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.719754 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2lx\" (UniqueName: \"kubernetes.io/projected/df32c63c-3381-4eff-8e21-969aaac5d74d-kube-api-access-hb2lx\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.719829 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8kqn\" (UniqueName: \"kubernetes.io/projected/c99112e4-bf15-412c-89dd-a68b4bd43dd5-kube-api-access-v8kqn\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.719842 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99112e4-bf15-412c-89dd-a68b4bd43dd5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.719853 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c99112e4-bf15-412c-89dd-a68b4bd43dd5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.720138 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-utilities\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.720434 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-catalog-content\") pod \"redhat-operators-2l6x4\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.731041 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:24 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:24 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:24 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.731111 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.737516 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2lx\" (UniqueName: \"kubernetes.io/projected/df32c63c-3381-4eff-8e21-969aaac5d74d-kube-api-access-hb2lx\") pod \"redhat-operators-2l6x4\" (UID: 
\"df32c63c-3381-4eff-8e21-969aaac5d74d\") " pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.837644 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.900671 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g549h"] Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.902345 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.918010 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g549h"] Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.928531 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"eac24e24b792c9e91eb43ded0697b89cb84fd347d7c65b252357217059bd28a3"} Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.940350 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"635146b4008896658135b01d72ff85f01dfde1939b3b39f5bfbe6ce9b0d872c0"} Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.940424 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"feb5570d56d1e79ecee00885ef406d398bfec8ee4a67f6c86ef5685e73531f5e"} Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.950346 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5b8dcaf3638d6c291fb2795ea7a8131254403330e9748aa7e2109e8a78104f43"} Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.950394 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0926a93c0a0adae31255166193482048df4bd391242dc6d02f46f4c0eec4d283"} Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.966274 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.966275 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv" event={"ID":"c99112e4-bf15-412c-89dd-a68b4bd43dd5","Type":"ContainerDied","Data":"a34bd003dbfe27a748b6ef6b937a0232b82dc5c4ea16658e1a6171dd82e6443a"} Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.966422 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a34bd003dbfe27a748b6ef6b937a0232b82dc5c4ea16658e1a6171dd82e6443a" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.971339 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"befd620c-6219-4da8-95b0-4514b81117b7","Type":"ContainerStarted","Data":"86a7f0008d83419b83b80ce7adc61cddcdc9ac6dc77ac9c91f65ebefd16a6258"} Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.978392 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnn7h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" 
start-of-body= Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.978450 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jnn7h" podUID="6fff6531-8ffa-478f-977b-a9daf12938fe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.979107 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.980407 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-jnn7h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.980453 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jnn7h" podUID="6fff6531-8ffa-478f-977b-a9daf12938fe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.981689 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:24 crc kubenswrapper[4770]: I0126 18:44:24.991089 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.028963 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-utilities\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " 
pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.029031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7sd\" (UniqueName: \"kubernetes.io/projected/f4f80ce7-123b-4717-92a7-73b09ba8c282-kube-api-access-lq7sd\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.029100 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-catalog-content\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.094504 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hfx"] Jan 26 18:44:25 crc kubenswrapper[4770]: W0126 18:44:25.104957 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ef1bc77_ee7c_490f_9df4_d891bcc631e6.slice/crio-a3e72e274944ec1b104674e45a3f961a81d4b2a8719f29bedd041b45def52c13 WatchSource:0}: Error finding container a3e72e274944ec1b104674e45a3f961a81d4b2a8719f29bedd041b45def52c13: Status 404 returned error can't find the container with id a3e72e274944ec1b104674e45a3f961a81d4b2a8719f29bedd041b45def52c13 Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.130759 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-utilities\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc 
kubenswrapper[4770]: I0126 18:44:25.130870 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq7sd\" (UniqueName: \"kubernetes.io/projected/f4f80ce7-123b-4717-92a7-73b09ba8c282-kube-api-access-lq7sd\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.131040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-catalog-content\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.133652 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-utilities\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.134050 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-catalog-content\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.159680 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq7sd\" (UniqueName: \"kubernetes.io/projected/f4f80ce7-123b-4717-92a7-73b09ba8c282-kube-api-access-lq7sd\") pod \"redhat-operators-g549h\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.233581 4770 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dskv9"] Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.242989 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:44:25 crc kubenswrapper[4770]: W0126 18:44:25.259538 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef61da5_d46a_4647_9372_2ef906bc7622.slice/crio-c2025a804a93a4e3060bb3567adcc43522b26e6cbf76380768dae3f95ceb7c8a WatchSource:0}: Error finding container c2025a804a93a4e3060bb3567adcc43522b26e6cbf76380768dae3f95ceb7c8a: Status 404 returned error can't find the container with id c2025a804a93a4e3060bb3567adcc43522b26e6cbf76380768dae3f95ceb7c8a Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.297902 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2l6x4"] Jan 26 18:44:25 crc kubenswrapper[4770]: W0126 18:44:25.302009 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf32c63c_3381_4eff_8e21_969aaac5d74d.slice/crio-3401f94fb2feb7c8ff546d12e6440b7b3af1beb04a7227de4ec7b98f68d1867f WatchSource:0}: Error finding container 3401f94fb2feb7c8ff546d12e6440b7b3af1beb04a7227de4ec7b98f68d1867f: Status 404 returned error can't find the container with id 3401f94fb2feb7c8ff546d12e6440b7b3af1beb04a7227de4ec7b98f68d1867f Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.539267 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g549h"] Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.588473 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.589036 4770 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.589544 4770 patch_prober.go:28] interesting pod/console-f9d7485db-5qzkc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.589630 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5qzkc" podUID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 18:44:25 crc kubenswrapper[4770]: W0126 18:44:25.590780 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4f80ce7_123b_4717_92a7_73b09ba8c282.slice/crio-cd36f8f63040f321680857554a4b0e69d2647c515ee2bedf6107c4e296a3a7f4 WatchSource:0}: Error finding container cd36f8f63040f321680857554a4b0e69d2647c515ee2bedf6107c4e296a3a7f4: Status 404 returned error can't find the container with id cd36f8f63040f321680857554a4b0e69d2647c515ee2bedf6107c4e296a3a7f4 Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.730466 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.735098 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:25 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:25 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:25 crc kubenswrapper[4770]: 
healthz check failed Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.735245 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.979814 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerID="8a4620bb9c41f2024703e30d6408d13feb271f39c22ff7ae6bcef116b0ce6d68" exitCode=0 Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.980114 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g549h" event={"ID":"f4f80ce7-123b-4717-92a7-73b09ba8c282","Type":"ContainerDied","Data":"8a4620bb9c41f2024703e30d6408d13feb271f39c22ff7ae6bcef116b0ce6d68"} Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.980163 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g549h" event={"ID":"f4f80ce7-123b-4717-92a7-73b09ba8c282","Type":"ContainerStarted","Data":"cd36f8f63040f321680857554a4b0e69d2647c515ee2bedf6107c4e296a3a7f4"} Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.982633 4770 generic.go:334] "Generic (PLEG): container finished" podID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerID="30ea9e8f69b998145d79a548ed08f1727f8c2f8241d88eb302f711bdd37d5b69" exitCode=0 Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.982723 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hfx" event={"ID":"8ef1bc77-ee7c-490f-9df4-d891bcc631e6","Type":"ContainerDied","Data":"30ea9e8f69b998145d79a548ed08f1727f8c2f8241d88eb302f711bdd37d5b69"} Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.982754 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hfx" 
event={"ID":"8ef1bc77-ee7c-490f-9df4-d891bcc631e6","Type":"ContainerStarted","Data":"a3e72e274944ec1b104674e45a3f961a81d4b2a8719f29bedd041b45def52c13"} Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.987505 4770 generic.go:334] "Generic (PLEG): container finished" podID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerID="f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73" exitCode=0 Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.987576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dskv9" event={"ID":"5ef61da5-d46a-4647-9372-2ef906bc7622","Type":"ContainerDied","Data":"f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73"} Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.987598 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dskv9" event={"ID":"5ef61da5-d46a-4647-9372-2ef906bc7622","Type":"ContainerStarted","Data":"c2025a804a93a4e3060bb3567adcc43522b26e6cbf76380768dae3f95ceb7c8a"} Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.995924 4770 generic.go:334] "Generic (PLEG): container finished" podID="befd620c-6219-4da8-95b0-4514b81117b7" containerID="8150914b0332ccf0318da2284a934df8ba25ef57137dafcba409f3fd427b6934" exitCode=0 Jan 26 18:44:25 crc kubenswrapper[4770]: I0126 18:44:25.996439 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"befd620c-6219-4da8-95b0-4514b81117b7","Type":"ContainerDied","Data":"8150914b0332ccf0318da2284a934df8ba25ef57137dafcba409f3fd427b6934"} Jan 26 18:44:26 crc kubenswrapper[4770]: I0126 18:44:26.001089 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4a3eadd4d4d0d66ba7561b91d67d6e5513c8063d5227e37d69d57d7d521a6440"} Jan 26 18:44:26 crc 
kubenswrapper[4770]: I0126 18:44:26.001729 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:44:26 crc kubenswrapper[4770]: I0126 18:44:26.008571 4770 generic.go:334] "Generic (PLEG): container finished" podID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerID="7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c" exitCode=0 Jan 26 18:44:26 crc kubenswrapper[4770]: I0126 18:44:26.008821 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2l6x4" event={"ID":"df32c63c-3381-4eff-8e21-969aaac5d74d","Type":"ContainerDied","Data":"7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c"} Jan 26 18:44:26 crc kubenswrapper[4770]: I0126 18:44:26.008856 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2l6x4" event={"ID":"df32c63c-3381-4eff-8e21-969aaac5d74d","Type":"ContainerStarted","Data":"3401f94fb2feb7c8ff546d12e6440b7b3af1beb04a7227de4ec7b98f68d1867f"} Jan 26 18:44:26 crc kubenswrapper[4770]: I0126 18:44:26.029199 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lndnr" Jan 26 18:44:26 crc kubenswrapper[4770]: I0126 18:44:26.729566 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:26 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:26 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:26 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:26 crc kubenswrapper[4770]: I0126 18:44:26.729660 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.317891 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.401640 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/befd620c-6219-4da8-95b0-4514b81117b7-kube-api-access\") pod \"befd620c-6219-4da8-95b0-4514b81117b7\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.402061 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/befd620c-6219-4da8-95b0-4514b81117b7-kubelet-dir\") pod \"befd620c-6219-4da8-95b0-4514b81117b7\" (UID: \"befd620c-6219-4da8-95b0-4514b81117b7\") " Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.402155 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/befd620c-6219-4da8-95b0-4514b81117b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "befd620c-6219-4da8-95b0-4514b81117b7" (UID: "befd620c-6219-4da8-95b0-4514b81117b7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.403401 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/befd620c-6219-4da8-95b0-4514b81117b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.407545 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befd620c-6219-4da8-95b0-4514b81117b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "befd620c-6219-4da8-95b0-4514b81117b7" (UID: "befd620c-6219-4da8-95b0-4514b81117b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.504889 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/befd620c-6219-4da8-95b0-4514b81117b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.776225 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:27 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:27 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:27 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:27 crc kubenswrapper[4770]: I0126 18:44:27.776290 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.030004 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 18:44:28 crc kubenswrapper[4770]: E0126 18:44:28.032931 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befd620c-6219-4da8-95b0-4514b81117b7" containerName="pruner" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.033060 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="befd620c-6219-4da8-95b0-4514b81117b7" containerName="pruner" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.033313 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="befd620c-6219-4da8-95b0-4514b81117b7" containerName="pruner" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.033859 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.036893 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.037169 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.037763 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"befd620c-6219-4da8-95b0-4514b81117b7","Type":"ContainerDied","Data":"86a7f0008d83419b83b80ce7adc61cddcdc9ac6dc77ac9c91f65ebefd16a6258"} Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.037796 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.037810 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a7f0008d83419b83b80ce7adc61cddcdc9ac6dc77ac9c91f65ebefd16a6258" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.038028 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.112277 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.112768 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.213931 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.214002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.214094 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.243674 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.365770 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.729781 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:28 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:28 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:28 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.730306 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:28 crc kubenswrapper[4770]: I0126 18:44:28.931174 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 18:44:28 crc kubenswrapper[4770]: W0126 18:44:28.946215 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7f9fb46f_4aac_46de_9cf1_26ea43ad2649.slice/crio-2f1c2291c824a3f604fc481a9c71810f57a0dd6a4f560ad125ccb88f721c2f0b WatchSource:0}: Error finding container 2f1c2291c824a3f604fc481a9c71810f57a0dd6a4f560ad125ccb88f721c2f0b: Status 404 returned error can't find the container with id 2f1c2291c824a3f604fc481a9c71810f57a0dd6a4f560ad125ccb88f721c2f0b Jan 26 18:44:29 crc kubenswrapper[4770]: I0126 18:44:29.049502 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7f9fb46f-4aac-46de-9cf1-26ea43ad2649","Type":"ContainerStarted","Data":"2f1c2291c824a3f604fc481a9c71810f57a0dd6a4f560ad125ccb88f721c2f0b"} Jan 26 18:44:29 crc kubenswrapper[4770]: I0126 18:44:29.730072 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:29 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:29 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:29 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:29 crc kubenswrapper[4770]: I0126 18:44:29.730578 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:30 crc kubenswrapper[4770]: I0126 18:44:30.090994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"7f9fb46f-4aac-46de-9cf1-26ea43ad2649","Type":"ContainerStarted","Data":"0f34ade6d4ec382c28d8235fa0521d226a51b10bdc0906a68bc197ecf3a71106"} Jan 26 18:44:30 crc kubenswrapper[4770]: I0126 18:44:30.118278 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.118255586 podStartE2EDuration="2.118255586s" podCreationTimestamp="2026-01-26 18:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:44:30.116268381 +0000 UTC m=+154.681175123" watchObservedRunningTime="2026-01-26 18:44:30.118255586 +0000 UTC m=+154.683162318" Jan 26 18:44:30 crc kubenswrapper[4770]: I0126 18:44:30.330659 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:44:30 crc kubenswrapper[4770]: I0126 18:44:30.330745 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:44:30 crc kubenswrapper[4770]: I0126 18:44:30.740272 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:30 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:30 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:30 crc kubenswrapper[4770]: healthz check failed Jan 26 
18:44:30 crc kubenswrapper[4770]: I0126 18:44:30.740478 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:30 crc kubenswrapper[4770]: I0126 18:44:30.807760 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lb2h8" Jan 26 18:44:31 crc kubenswrapper[4770]: I0126 18:44:31.105614 4770 generic.go:334] "Generic (PLEG): container finished" podID="7f9fb46f-4aac-46de-9cf1-26ea43ad2649" containerID="0f34ade6d4ec382c28d8235fa0521d226a51b10bdc0906a68bc197ecf3a71106" exitCode=0 Jan 26 18:44:31 crc kubenswrapper[4770]: I0126 18:44:31.105656 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7f9fb46f-4aac-46de-9cf1-26ea43ad2649","Type":"ContainerDied","Data":"0f34ade6d4ec382c28d8235fa0521d226a51b10bdc0906a68bc197ecf3a71106"} Jan 26 18:44:31 crc kubenswrapper[4770]: I0126 18:44:31.730481 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:31 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:31 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:31 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:31 crc kubenswrapper[4770]: I0126 18:44:31.730540 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:33 crc kubenswrapper[4770]: I0126 18:44:32.729399 4770 patch_prober.go:28] interesting 
pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:33 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:33 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:33 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:33 crc kubenswrapper[4770]: I0126 18:44:32.729669 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:33 crc kubenswrapper[4770]: I0126 18:44:33.728674 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:33 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:33 crc kubenswrapper[4770]: [+]process-running ok Jan 26 18:44:33 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:33 crc kubenswrapper[4770]: I0126 18:44:33.728761 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:34 crc kubenswrapper[4770]: I0126 18:44:34.729765 4770 patch_prober.go:28] interesting pod/router-default-5444994796-tl5vr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 18:44:34 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Jan 26 18:44:34 crc kubenswrapper[4770]: 
[+]process-running ok Jan 26 18:44:34 crc kubenswrapper[4770]: healthz check failed Jan 26 18:44:34 crc kubenswrapper[4770]: I0126 18:44:34.729829 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tl5vr" podUID="7bd0341c-5414-42a6-988e-b05c09a2c5c8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 18:44:34 crc kubenswrapper[4770]: I0126 18:44:34.982103 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jnn7h" Jan 26 18:44:35 crc kubenswrapper[4770]: I0126 18:44:35.596395 4770 patch_prober.go:28] interesting pod/console-f9d7485db-5qzkc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 26 18:44:35 crc kubenswrapper[4770]: I0126 18:44:35.596486 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5qzkc" podUID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 26 18:44:35 crc kubenswrapper[4770]: I0126 18:44:35.730652 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:35 crc kubenswrapper[4770]: I0126 18:44:35.732774 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tl5vr" Jan 26 18:44:37 crc kubenswrapper[4770]: I0126 18:44:37.284451 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:37 crc kubenswrapper[4770]: I0126 18:44:37.359876 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kube-api-access\") pod \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\" (UID: \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " Jan 26 18:44:37 crc kubenswrapper[4770]: I0126 18:44:37.359971 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kubelet-dir\") pod \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\" (UID: \"7f9fb46f-4aac-46de-9cf1-26ea43ad2649\") " Jan 26 18:44:37 crc kubenswrapper[4770]: I0126 18:44:37.360399 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f9fb46f-4aac-46de-9cf1-26ea43ad2649" (UID: "7f9fb46f-4aac-46de-9cf1-26ea43ad2649"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:44:37 crc kubenswrapper[4770]: I0126 18:44:37.367920 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f9fb46f-4aac-46de-9cf1-26ea43ad2649" (UID: "7f9fb46f-4aac-46de-9cf1-26ea43ad2649"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:44:37 crc kubenswrapper[4770]: I0126 18:44:37.461085 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:37 crc kubenswrapper[4770]: I0126 18:44:37.461116 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f9fb46f-4aac-46de-9cf1-26ea43ad2649-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:44:38 crc kubenswrapper[4770]: I0126 18:44:38.145373 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7f9fb46f-4aac-46de-9cf1-26ea43ad2649","Type":"ContainerDied","Data":"2f1c2291c824a3f604fc481a9c71810f57a0dd6a4f560ad125ccb88f721c2f0b"} Jan 26 18:44:38 crc kubenswrapper[4770]: I0126 18:44:38.145432 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1c2291c824a3f604fc481a9c71810f57a0dd6a4f560ad125ccb88f721c2f0b" Jan 26 18:44:38 crc kubenswrapper[4770]: I0126 18:44:38.145504 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 18:44:39 crc kubenswrapper[4770]: I0126 18:44:39.296470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:39 crc kubenswrapper[4770]: I0126 18:44:39.306685 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f836a816-01c1-448b-9736-c65a8f4f0044-metrics-certs\") pod \"network-metrics-daemon-bqfpk\" (UID: \"f836a816-01c1-448b-9736-c65a8f4f0044\") " pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:39 crc kubenswrapper[4770]: I0126 18:44:39.492567 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bqfpk" Jan 26 18:44:43 crc kubenswrapper[4770]: I0126 18:44:43.402810 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:44:45 crc kubenswrapper[4770]: I0126 18:44:45.831619 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:45 crc kubenswrapper[4770]: I0126 18:44:45.836661 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:44:55 crc kubenswrapper[4770]: E0126 18:44:55.330227 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 18:44:55 crc kubenswrapper[4770]: E0126 18:44:55.330778 4770 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlr85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dskv9_openshift-marketplace(5ef61da5-d46a-4647-9372-2ef906bc7622): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:44:55 crc kubenswrapper[4770]: E0126 18:44:55.332011 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dskv9" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" Jan 26 18:44:55 crc kubenswrapper[4770]: I0126 18:44:55.699932 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" Jan 26 18:44:56 crc kubenswrapper[4770]: E0126 18:44:56.708256 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dskv9" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" Jan 26 18:44:56 crc kubenswrapper[4770]: E0126 18:44:56.792076 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 18:44:56 crc kubenswrapper[4770]: E0126 18:44:56.792263 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzngv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2hflw_openshift-marketplace(ec165c57-f43f-4dbe-9768-bbfbab10826c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:44:56 crc kubenswrapper[4770]: E0126 18:44:56.793453 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2hflw" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" Jan 26 18:44:56 crc 
kubenswrapper[4770]: E0126 18:44:56.820652 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 18:44:56 crc kubenswrapper[4770]: E0126 18:44:56.820813 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-929kx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-zbq6m_openshift-marketplace(05fe33d7-6976-43c6-aa31-31751ac4f332): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:44:56 crc kubenswrapper[4770]: E0126 18:44:56.822131 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zbq6m" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.122994 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zbq6m" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.123052 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2hflw" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.192198 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.192376 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsntc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-f9hfx_openshift-marketplace(8ef1bc77-ee7c-490f-9df4-d891bcc631e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.193557 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-f9hfx" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.199597 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.199855 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvczs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qmc66_openshift-marketplace(0d23ddd5-e513-4a80-89ab-28f99522aaa8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:44:58 crc kubenswrapper[4770]: E0126 18:44:58.201038 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qmc66" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.136381 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr"] Jan 26 18:45:00 crc kubenswrapper[4770]: E0126 18:45:00.137074 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9fb46f-4aac-46de-9cf1-26ea43ad2649" containerName="pruner" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.137088 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9fb46f-4aac-46de-9cf1-26ea43ad2649" containerName="pruner" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.137185 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9fb46f-4aac-46de-9cf1-26ea43ad2649" containerName="pruner" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.137624 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.139607 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.139965 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.150225 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr"] Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.205273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d7e7ec-f398-4606-8122-8338323b36c4-config-volume\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.205313 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml552\" (UniqueName: \"kubernetes.io/projected/04d7e7ec-f398-4606-8122-8338323b36c4-kube-api-access-ml552\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.205340 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d7e7ec-f398-4606-8122-8338323b36c4-secret-volume\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.306448 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d7e7ec-f398-4606-8122-8338323b36c4-config-volume\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.306822 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml552\" (UniqueName: \"kubernetes.io/projected/04d7e7ec-f398-4606-8122-8338323b36c4-kube-api-access-ml552\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.307220 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d7e7ec-f398-4606-8122-8338323b36c4-secret-volume\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.307398 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d7e7ec-f398-4606-8122-8338323b36c4-config-volume\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.313923 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/04d7e7ec-f398-4606-8122-8338323b36c4-secret-volume\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.322050 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml552\" (UniqueName: \"kubernetes.io/projected/04d7e7ec-f398-4606-8122-8338323b36c4-kube-api-access-ml552\") pod \"collect-profiles-29490885-f2mqr\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.330956 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.331000 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:45:00 crc kubenswrapper[4770]: I0126 18:45:00.458812 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.230182 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.231187 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.235946 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.240606 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.242132 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.318104 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7884ba71-c00a-412b-9b60-dd14a5fbb529-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.318156 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7884ba71-c00a-412b-9b60-dd14a5fbb529-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.375322 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-f9hfx" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.375359 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qmc66" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.420280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7884ba71-c00a-412b-9b60-dd14a5fbb529-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.420344 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7884ba71-c00a-412b-9b60-dd14a5fbb529-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.420515 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7884ba71-c00a-412b-9b60-dd14a5fbb529-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.454288 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7884ba71-c00a-412b-9b60-dd14a5fbb529-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.515318 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.515476 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq7sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-g549h_openshift-marketplace(f4f80ce7-123b-4717-92a7-73b09ba8c282): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:45:01 crc 
kubenswrapper[4770]: E0126 18:45:01.516937 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-g549h" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.558213 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.578409 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.578542 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85g72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tp628_openshift-marketplace(46328d44-acf0-4a1f-86c9-c2c08d21640e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.579984 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tp628" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" Jan 26 18:45:01 crc 
kubenswrapper[4770]: E0126 18:45:01.608547 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.608798 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-2l6x4_openshift-marketplace(df32c63c-3381-4eff-8e21-969aaac5d74d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 18:45:01 crc kubenswrapper[4770]: E0126 18:45:01.609905 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2l6x4" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.810425 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bqfpk"] Jan 26 18:45:01 crc kubenswrapper[4770]: W0126 18:45:01.817591 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf836a816_01c1_448b_9736_c65a8f4f0044.slice/crio-429c7a88910dd652665160928babf6c38cba1bd9ddb644e158bca81f52982318 WatchSource:0}: Error finding container 429c7a88910dd652665160928babf6c38cba1bd9ddb644e158bca81f52982318: Status 404 returned error can't find the container with id 429c7a88910dd652665160928babf6c38cba1bd9ddb644e158bca81f52982318 Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.886202 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr"] Jan 26 18:45:01 crc kubenswrapper[4770]: W0126 18:45:01.892279 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04d7e7ec_f398_4606_8122_8338323b36c4.slice/crio-7c51a2e83b9b89ab7bfc5e863d2a6bdb3d5d10a403b47386393d33d9bf6502b5 WatchSource:0}: Error finding container 7c51a2e83b9b89ab7bfc5e863d2a6bdb3d5d10a403b47386393d33d9bf6502b5: Status 404 returned error can't find the 
container with id 7c51a2e83b9b89ab7bfc5e863d2a6bdb3d5d10a403b47386393d33d9bf6502b5 Jan 26 18:45:01 crc kubenswrapper[4770]: I0126 18:45:01.947231 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 18:45:01 crc kubenswrapper[4770]: W0126 18:45:01.981945 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7884ba71_c00a_412b_9b60_dd14a5fbb529.slice/crio-83f7163fa2dddc1286c43544d71e111bd6577b83ac00b819f7a5d428c5800735 WatchSource:0}: Error finding container 83f7163fa2dddc1286c43544d71e111bd6577b83ac00b819f7a5d428c5800735: Status 404 returned error can't find the container with id 83f7163fa2dddc1286c43544d71e111bd6577b83ac00b819f7a5d428c5800735 Jan 26 18:45:02 crc kubenswrapper[4770]: I0126 18:45:02.310309 4770 generic.go:334] "Generic (PLEG): container finished" podID="04d7e7ec-f398-4606-8122-8338323b36c4" containerID="7a36f0dae3ac18b4ecbce87ef62c14789841b13caaf93e6187e83077e50922da" exitCode=0 Jan 26 18:45:02 crc kubenswrapper[4770]: I0126 18:45:02.310366 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" event={"ID":"04d7e7ec-f398-4606-8122-8338323b36c4","Type":"ContainerDied","Data":"7a36f0dae3ac18b4ecbce87ef62c14789841b13caaf93e6187e83077e50922da"} Jan 26 18:45:02 crc kubenswrapper[4770]: I0126 18:45:02.310672 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" event={"ID":"04d7e7ec-f398-4606-8122-8338323b36c4","Type":"ContainerStarted","Data":"7c51a2e83b9b89ab7bfc5e863d2a6bdb3d5d10a403b47386393d33d9bf6502b5"} Jan 26 18:45:02 crc kubenswrapper[4770]: I0126 18:45:02.311855 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"7884ba71-c00a-412b-9b60-dd14a5fbb529","Type":"ContainerStarted","Data":"83f7163fa2dddc1286c43544d71e111bd6577b83ac00b819f7a5d428c5800735"} Jan 26 18:45:02 crc kubenswrapper[4770]: I0126 18:45:02.313313 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" event={"ID":"f836a816-01c1-448b-9736-c65a8f4f0044","Type":"ContainerStarted","Data":"44976d48415a51ec5517d7b4ff869ed817d839c574fd30b2d4b5ff3e2b79ea58"} Jan 26 18:45:02 crc kubenswrapper[4770]: I0126 18:45:02.313341 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" event={"ID":"f836a816-01c1-448b-9736-c65a8f4f0044","Type":"ContainerStarted","Data":"429c7a88910dd652665160928babf6c38cba1bd9ddb644e158bca81f52982318"} Jan 26 18:45:02 crc kubenswrapper[4770]: E0126 18:45:02.314818 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tp628" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" Jan 26 18:45:02 crc kubenswrapper[4770]: E0126 18:45:02.314867 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2l6x4" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" Jan 26 18:45:02 crc kubenswrapper[4770]: E0126 18:45:02.314929 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-g549h" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" Jan 26 18:45:03 crc 
kubenswrapper[4770]: I0126 18:45:03.319462 4770 generic.go:334] "Generic (PLEG): container finished" podID="7884ba71-c00a-412b-9b60-dd14a5fbb529" containerID="2a1da0ee8b3e430e5a85cca1409b9f52908afa3ac41aa7042972008910d80268" exitCode=0 Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.319564 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7884ba71-c00a-412b-9b60-dd14a5fbb529","Type":"ContainerDied","Data":"2a1da0ee8b3e430e5a85cca1409b9f52908afa3ac41aa7042972008910d80268"} Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.323123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bqfpk" event={"ID":"f836a816-01c1-448b-9736-c65a8f4f0044","Type":"ContainerStarted","Data":"b0ac307bec350725cc2a3353f3754f49c840bab3750dc82a1182ada545dff373"} Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.348611 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bqfpk" podStartSLOduration=167.348596462 podStartE2EDuration="2m47.348596462s" podCreationTimestamp="2026-01-26 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:45:03.34744096 +0000 UTC m=+187.912347722" watchObservedRunningTime="2026-01-26 18:45:03.348596462 +0000 UTC m=+187.913503194" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.646009 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.695872 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.754085 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml552\" (UniqueName: \"kubernetes.io/projected/04d7e7ec-f398-4606-8122-8338323b36c4-kube-api-access-ml552\") pod \"04d7e7ec-f398-4606-8122-8338323b36c4\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.754192 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d7e7ec-f398-4606-8122-8338323b36c4-config-volume\") pod \"04d7e7ec-f398-4606-8122-8338323b36c4\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.754211 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d7e7ec-f398-4606-8122-8338323b36c4-secret-volume\") pod \"04d7e7ec-f398-4606-8122-8338323b36c4\" (UID: \"04d7e7ec-f398-4606-8122-8338323b36c4\") " Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.755509 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d7e7ec-f398-4606-8122-8338323b36c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "04d7e7ec-f398-4606-8122-8338323b36c4" (UID: "04d7e7ec-f398-4606-8122-8338323b36c4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.760274 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d7e7ec-f398-4606-8122-8338323b36c4-kube-api-access-ml552" (OuterVolumeSpecName: "kube-api-access-ml552") pod "04d7e7ec-f398-4606-8122-8338323b36c4" (UID: "04d7e7ec-f398-4606-8122-8338323b36c4"). InnerVolumeSpecName "kube-api-access-ml552". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.762324 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d7e7ec-f398-4606-8122-8338323b36c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04d7e7ec-f398-4606-8122-8338323b36c4" (UID: "04d7e7ec-f398-4606-8122-8338323b36c4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.855400 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml552\" (UniqueName: \"kubernetes.io/projected/04d7e7ec-f398-4606-8122-8338323b36c4-kube-api-access-ml552\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.855445 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04d7e7ec-f398-4606-8122-8338323b36c4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:03 crc kubenswrapper[4770]: I0126 18:45:03.855456 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04d7e7ec-f398-4606-8122-8338323b36c4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.329117 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" 
event={"ID":"04d7e7ec-f398-4606-8122-8338323b36c4","Type":"ContainerDied","Data":"7c51a2e83b9b89ab7bfc5e863d2a6bdb3d5d10a403b47386393d33d9bf6502b5"} Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.329190 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c51a2e83b9b89ab7bfc5e863d2a6bdb3d5d10a403b47386393d33d9bf6502b5" Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.329228 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr" Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.695968 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.768192 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7884ba71-c00a-412b-9b60-dd14a5fbb529-kube-api-access\") pod \"7884ba71-c00a-412b-9b60-dd14a5fbb529\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.768276 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7884ba71-c00a-412b-9b60-dd14a5fbb529-kubelet-dir\") pod \"7884ba71-c00a-412b-9b60-dd14a5fbb529\" (UID: \"7884ba71-c00a-412b-9b60-dd14a5fbb529\") " Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.768834 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7884ba71-c00a-412b-9b60-dd14a5fbb529-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7884ba71-c00a-412b-9b60-dd14a5fbb529" (UID: "7884ba71-c00a-412b-9b60-dd14a5fbb529"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.772787 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7884ba71-c00a-412b-9b60-dd14a5fbb529-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7884ba71-c00a-412b-9b60-dd14a5fbb529" (UID: "7884ba71-c00a-412b-9b60-dd14a5fbb529"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.870150 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7884ba71-c00a-412b-9b60-dd14a5fbb529-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:04 crc kubenswrapper[4770]: I0126 18:45:04.870201 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7884ba71-c00a-412b-9b60-dd14a5fbb529-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:05 crc kubenswrapper[4770]: I0126 18:45:05.334940 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7884ba71-c00a-412b-9b60-dd14a5fbb529","Type":"ContainerDied","Data":"83f7163fa2dddc1286c43544d71e111bd6577b83ac00b819f7a5d428c5800735"} Jan 26 18:45:05 crc kubenswrapper[4770]: I0126 18:45:05.334974 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f7163fa2dddc1286c43544d71e111bd6577b83ac00b819f7a5d428c5800735" Jan 26 18:45:05 crc kubenswrapper[4770]: I0126 18:45:05.335020 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.025556 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 18:45:06 crc kubenswrapper[4770]: E0126 18:45:06.025895 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884ba71-c00a-412b-9b60-dd14a5fbb529" containerName="pruner" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.025908 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884ba71-c00a-412b-9b60-dd14a5fbb529" containerName="pruner" Jan 26 18:45:06 crc kubenswrapper[4770]: E0126 18:45:06.025924 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d7e7ec-f398-4606-8122-8338323b36c4" containerName="collect-profiles" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.025930 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d7e7ec-f398-4606-8122-8338323b36c4" containerName="collect-profiles" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.026033 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d7e7ec-f398-4606-8122-8338323b36c4" containerName="collect-profiles" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.026048 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7884ba71-c00a-412b-9b60-dd14a5fbb529" containerName="pruner" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.026389 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.029998 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.030020 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.037627 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.085284 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-var-lock\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.085570 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kubelet-dir\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.085646 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kube-api-access\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.187285 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kubelet-dir\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.187338 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kube-api-access\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.187362 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-var-lock\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.187439 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-var-lock\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.187881 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kubelet-dir\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.204263 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kube-api-access\") pod \"installer-9-crc\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.342796 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 18:45:06 crc kubenswrapper[4770]: I0126 18:45:06.742933 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 18:45:07 crc kubenswrapper[4770]: I0126 18:45:07.348998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"726d3596-cd98-4f3e-a8ae-eaf054ecd391","Type":"ContainerStarted","Data":"fff4c6206931f4f14f80d68e389f0b939fb6eee31b2ad014aec039f9a3c8d5c9"} Jan 26 18:45:07 crc kubenswrapper[4770]: I0126 18:45:07.349349 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"726d3596-cd98-4f3e-a8ae-eaf054ecd391","Type":"ContainerStarted","Data":"6bd62a5aab9223c979e58cc88dd38fa02f6245b40d78c69943947972cfc0ec4e"} Jan 26 18:45:07 crc kubenswrapper[4770]: I0126 18:45:07.379232 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.379214542 podStartE2EDuration="1.379214542s" podCreationTimestamp="2026-01-26 18:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:45:07.376318842 +0000 UTC m=+191.941225584" watchObservedRunningTime="2026-01-26 18:45:07.379214542 +0000 UTC m=+191.944121274" Jan 26 18:45:12 crc kubenswrapper[4770]: I0126 18:45:12.383466 4770 generic.go:334] "Generic (PLEG): container finished" podID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerID="539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7" exitCode=0 Jan 26 18:45:12 crc kubenswrapper[4770]: I0126 18:45:12.383548 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-dskv9" event={"ID":"5ef61da5-d46a-4647-9372-2ef906bc7622","Type":"ContainerDied","Data":"539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7"} Jan 26 18:45:12 crc kubenswrapper[4770]: I0126 18:45:12.386855 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerID="f71cec6d39bb00d75a46d652412ae28981e39a247e95ac064e9baa7245238649" exitCode=0 Jan 26 18:45:12 crc kubenswrapper[4770]: I0126 18:45:12.386893 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hflw" event={"ID":"ec165c57-f43f-4dbe-9768-bbfbab10826c","Type":"ContainerDied","Data":"f71cec6d39bb00d75a46d652412ae28981e39a247e95ac064e9baa7245238649"} Jan 26 18:45:13 crc kubenswrapper[4770]: I0126 18:45:13.397171 4770 generic.go:334] "Generic (PLEG): container finished" podID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerID="2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4" exitCode=0 Jan 26 18:45:13 crc kubenswrapper[4770]: I0126 18:45:13.397454 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbq6m" event={"ID":"05fe33d7-6976-43c6-aa31-31751ac4f332","Type":"ContainerDied","Data":"2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4"} Jan 26 18:45:13 crc kubenswrapper[4770]: I0126 18:45:13.400221 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dskv9" event={"ID":"5ef61da5-d46a-4647-9372-2ef906bc7622","Type":"ContainerStarted","Data":"110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d"} Jan 26 18:45:13 crc kubenswrapper[4770]: I0126 18:45:13.403850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hflw" 
event={"ID":"ec165c57-f43f-4dbe-9768-bbfbab10826c","Type":"ContainerStarted","Data":"d87c90d6ddf1bb1de86e9dd41d07f3ec4f478696078ba21fe056b5173d43f60a"} Jan 26 18:45:13 crc kubenswrapper[4770]: I0126 18:45:13.433433 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dskv9" podStartSLOduration=3.640332862 podStartE2EDuration="50.433416587s" podCreationTimestamp="2026-01-26 18:44:23 +0000 UTC" firstStartedPulling="2026-01-26 18:44:25.990255459 +0000 UTC m=+150.555162191" lastFinishedPulling="2026-01-26 18:45:12.783339184 +0000 UTC m=+197.348245916" observedRunningTime="2026-01-26 18:45:13.433233312 +0000 UTC m=+197.998140074" watchObservedRunningTime="2026-01-26 18:45:13.433416587 +0000 UTC m=+197.998323319" Jan 26 18:45:13 crc kubenswrapper[4770]: I0126 18:45:13.453394 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2hflw" podStartSLOduration=3.377943733 podStartE2EDuration="52.453376176s" podCreationTimestamp="2026-01-26 18:44:21 +0000 UTC" firstStartedPulling="2026-01-26 18:44:23.896947474 +0000 UTC m=+148.461854206" lastFinishedPulling="2026-01-26 18:45:12.972379907 +0000 UTC m=+197.537286649" observedRunningTime="2026-01-26 18:45:13.447966888 +0000 UTC m=+198.012873620" watchObservedRunningTime="2026-01-26 18:45:13.453376176 +0000 UTC m=+198.018282908" Jan 26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.418755 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbq6m" event={"ID":"05fe33d7-6976-43c6-aa31-31751ac4f332","Type":"ContainerStarted","Data":"98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be"} Jan 26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.420835 4770 generic.go:334] "Generic (PLEG): container finished" podID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerID="667d8c8ecff6a16e524a6aa4a94d82d80035775dcc46cb4344ee4aeaea2b3202" exitCode=0 Jan 
26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.420904 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hfx" event={"ID":"8ef1bc77-ee7c-490f-9df4-d891bcc631e6","Type":"ContainerDied","Data":"667d8c8ecff6a16e524a6aa4a94d82d80035775dcc46cb4344ee4aeaea2b3202"} Jan 26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.423281 4770 generic.go:334] "Generic (PLEG): container finished" podID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerID="bbf036292468797dffb18dd6f534a46db72fcbdc85ba7ad9f6d05382c3b9d74d" exitCode=0 Jan 26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.423301 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmc66" event={"ID":"0d23ddd5-e513-4a80-89ab-28f99522aaa8","Type":"ContainerDied","Data":"bbf036292468797dffb18dd6f534a46db72fcbdc85ba7ad9f6d05382c3b9d74d"} Jan 26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.446687 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zbq6m" podStartSLOduration=2.536866079 podStartE2EDuration="53.446671355s" podCreationTimestamp="2026-01-26 18:44:21 +0000 UTC" firstStartedPulling="2026-01-26 18:44:22.855717792 +0000 UTC m=+147.420624524" lastFinishedPulling="2026-01-26 18:45:13.765523068 +0000 UTC m=+198.330429800" observedRunningTime="2026-01-26 18:45:14.446287334 +0000 UTC m=+199.011194076" watchObservedRunningTime="2026-01-26 18:45:14.446671355 +0000 UTC m=+199.011578077" Jan 26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.579568 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:45:14 crc kubenswrapper[4770]: I0126 18:45:14.579609 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:45:15 crc kubenswrapper[4770]: I0126 18:45:15.430905 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmc66" event={"ID":"0d23ddd5-e513-4a80-89ab-28f99522aaa8","Type":"ContainerStarted","Data":"e6b4b4445ee31e84ae838cec1280ae30e03d924d8f5294244346073a06af7912"} Jan 26 18:45:15 crc kubenswrapper[4770]: I0126 18:45:15.433060 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2l6x4" event={"ID":"df32c63c-3381-4eff-8e21-969aaac5d74d","Type":"ContainerStarted","Data":"b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca"} Jan 26 18:45:15 crc kubenswrapper[4770]: I0126 18:45:15.435239 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g549h" event={"ID":"f4f80ce7-123b-4717-92a7-73b09ba8c282","Type":"ContainerStarted","Data":"38bcd068abd6e1ec1287235d0a17af449e05b848bcd9e062c45697210e94ec2a"} Jan 26 18:45:15 crc kubenswrapper[4770]: I0126 18:45:15.437843 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hfx" event={"ID":"8ef1bc77-ee7c-490f-9df4-d891bcc631e6","Type":"ContainerStarted","Data":"e558f3da9787667f79a15ca152c25fcc21bbeac4a50b8b4bba454e73f1c772ec"} Jan 26 18:45:15 crc kubenswrapper[4770]: I0126 18:45:15.449700 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qmc66" podStartSLOduration=3.475424667 podStartE2EDuration="54.449677639s" podCreationTimestamp="2026-01-26 18:44:21 +0000 UTC" firstStartedPulling="2026-01-26 18:44:23.890326561 +0000 UTC m=+148.455233293" lastFinishedPulling="2026-01-26 18:45:14.864579533 +0000 UTC m=+199.429486265" observedRunningTime="2026-01-26 18:45:15.446649716 +0000 UTC m=+200.011556438" watchObservedRunningTime="2026-01-26 18:45:15.449677639 +0000 UTC m=+200.014584371" Jan 26 18:45:15 crc kubenswrapper[4770]: I0126 18:45:15.497419 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-f9hfx" podStartSLOduration=3.571736714 podStartE2EDuration="52.497399551s" podCreationTimestamp="2026-01-26 18:44:23 +0000 UTC" firstStartedPulling="2026-01-26 18:44:25.987158183 +0000 UTC m=+150.552064915" lastFinishedPulling="2026-01-26 18:45:14.91282102 +0000 UTC m=+199.477727752" observedRunningTime="2026-01-26 18:45:15.496638521 +0000 UTC m=+200.061545253" watchObservedRunningTime="2026-01-26 18:45:15.497399551 +0000 UTC m=+200.062306283" Jan 26 18:45:15 crc kubenswrapper[4770]: I0126 18:45:15.641058 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dskv9" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="registry-server" probeResult="failure" output=< Jan 26 18:45:15 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 18:45:15 crc kubenswrapper[4770]: > Jan 26 18:45:16 crc kubenswrapper[4770]: I0126 18:45:16.446496 4770 generic.go:334] "Generic (PLEG): container finished" podID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerID="b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca" exitCode=0 Jan 26 18:45:16 crc kubenswrapper[4770]: I0126 18:45:16.446580 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2l6x4" event={"ID":"df32c63c-3381-4eff-8e21-969aaac5d74d","Type":"ContainerDied","Data":"b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca"} Jan 26 18:45:16 crc kubenswrapper[4770]: I0126 18:45:16.451704 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerID="38bcd068abd6e1ec1287235d0a17af449e05b848bcd9e062c45697210e94ec2a" exitCode=0 Jan 26 18:45:16 crc kubenswrapper[4770]: I0126 18:45:16.451759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g549h" 
event={"ID":"f4f80ce7-123b-4717-92a7-73b09ba8c282","Type":"ContainerDied","Data":"38bcd068abd6e1ec1287235d0a17af449e05b848bcd9e062c45697210e94ec2a"} Jan 26 18:45:20 crc kubenswrapper[4770]: I0126 18:45:20.475849 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g549h" event={"ID":"f4f80ce7-123b-4717-92a7-73b09ba8c282","Type":"ContainerStarted","Data":"647aed8a9eba0f140c6972c59da0ed24dfe24b65e9939f1596652e9f238be83d"} Jan 26 18:45:21 crc kubenswrapper[4770]: I0126 18:45:21.813566 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:45:21 crc kubenswrapper[4770]: I0126 18:45:21.813938 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:45:21 crc kubenswrapper[4770]: I0126 18:45:21.870288 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:45:21 crc kubenswrapper[4770]: I0126 18:45:21.890943 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g549h" podStartSLOduration=4.445785516 podStartE2EDuration="57.890925325s" podCreationTimestamp="2026-01-26 18:44:24 +0000 UTC" firstStartedPulling="2026-01-26 18:44:25.983213884 +0000 UTC m=+150.548120616" lastFinishedPulling="2026-01-26 18:45:19.428353693 +0000 UTC m=+203.993260425" observedRunningTime="2026-01-26 18:45:20.50637885 +0000 UTC m=+205.071285582" watchObservedRunningTime="2026-01-26 18:45:21.890925325 +0000 UTC m=+206.455832057" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.057315 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.057378 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.102305 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.274575 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.274674 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.311665 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.490794 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2l6x4" event={"ID":"df32c63c-3381-4eff-8e21-969aaac5d74d","Type":"ContainerStarted","Data":"0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff"} Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.532331 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.538131 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zbq6m" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.550012 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:45:22 crc kubenswrapper[4770]: I0126 18:45:22.554225 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2l6x4" podStartSLOduration=3.662879629 podStartE2EDuration="58.554208619s" podCreationTimestamp="2026-01-26 
18:44:24 +0000 UTC" firstStartedPulling="2026-01-26 18:44:26.026339328 +0000 UTC m=+150.591246060" lastFinishedPulling="2026-01-26 18:45:20.917668318 +0000 UTC m=+205.482575050" observedRunningTime="2026-01-26 18:45:22.519895946 +0000 UTC m=+207.084802678" watchObservedRunningTime="2026-01-26 18:45:22.554208619 +0000 UTC m=+207.119115351" Jan 26 18:45:23 crc kubenswrapper[4770]: I0126 18:45:23.994908 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qmc66"] Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.198589 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hflw"] Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.501230 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qmc66" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="registry-server" containerID="cri-o://e6b4b4445ee31e84ae838cec1280ae30e03d924d8f5294244346073a06af7912" gracePeriod=2 Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.501536 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2hflw" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="registry-server" containerID="cri-o://d87c90d6ddf1bb1de86e9dd41d07f3ec4f478696078ba21fe056b5173d43f60a" gracePeriod=2 Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.583295 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.583416 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.618675 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 
26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.620667 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.669176 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dskv9" Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.838422 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:45:24 crc kubenswrapper[4770]: I0126 18:45:24.838495 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:45:25 crc kubenswrapper[4770]: I0126 18:45:25.244429 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:45:25 crc kubenswrapper[4770]: I0126 18:45:25.244512 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:45:25 crc kubenswrapper[4770]: I0126 18:45:25.876303 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2l6x4" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="registry-server" probeResult="failure" output=< Jan 26 18:45:25 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 18:45:25 crc kubenswrapper[4770]: > Jan 26 18:45:25 crc kubenswrapper[4770]: I0126 18:45:25.916567 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:45:26 crc kubenswrapper[4770]: I0126 18:45:26.291100 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g549h" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="registry-server" probeResult="failure" 
output=< Jan 26 18:45:26 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 18:45:26 crc kubenswrapper[4770]: > Jan 26 18:45:26 crc kubenswrapper[4770]: I0126 18:45:26.520805 4770 generic.go:334] "Generic (PLEG): container finished" podID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerID="e6b4b4445ee31e84ae838cec1280ae30e03d924d8f5294244346073a06af7912" exitCode=0 Jan 26 18:45:26 crc kubenswrapper[4770]: I0126 18:45:26.520885 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmc66" event={"ID":"0d23ddd5-e513-4a80-89ab-28f99522aaa8","Type":"ContainerDied","Data":"e6b4b4445ee31e84ae838cec1280ae30e03d924d8f5294244346073a06af7912"} Jan 26 18:45:26 crc kubenswrapper[4770]: I0126 18:45:26.523475 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerID="d87c90d6ddf1bb1de86e9dd41d07f3ec4f478696078ba21fe056b5173d43f60a" exitCode=0 Jan 26 18:45:26 crc kubenswrapper[4770]: I0126 18:45:26.523541 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hflw" event={"ID":"ec165c57-f43f-4dbe-9768-bbfbab10826c","Type":"ContainerDied","Data":"d87c90d6ddf1bb1de86e9dd41d07f3ec4f478696078ba21fe056b5173d43f60a"} Jan 26 18:45:26 crc kubenswrapper[4770]: I0126 18:45:26.598190 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hfx"] Jan 26 18:45:27 crc kubenswrapper[4770]: I0126 18:45:27.529624 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f9hfx" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="registry-server" containerID="cri-o://e558f3da9787667f79a15ca152c25fcc21bbeac4a50b8b4bba454e73f1c772ec" gracePeriod=2 Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.019408 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.104527 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-utilities\") pod \"ec165c57-f43f-4dbe-9768-bbfbab10826c\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.104583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzngv\" (UniqueName: \"kubernetes.io/projected/ec165c57-f43f-4dbe-9768-bbfbab10826c-kube-api-access-bzngv\") pod \"ec165c57-f43f-4dbe-9768-bbfbab10826c\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.104632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-catalog-content\") pod \"ec165c57-f43f-4dbe-9768-bbfbab10826c\" (UID: \"ec165c57-f43f-4dbe-9768-bbfbab10826c\") " Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.106288 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-utilities" (OuterVolumeSpecName: "utilities") pod "ec165c57-f43f-4dbe-9768-bbfbab10826c" (UID: "ec165c57-f43f-4dbe-9768-bbfbab10826c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.115649 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec165c57-f43f-4dbe-9768-bbfbab10826c-kube-api-access-bzngv" (OuterVolumeSpecName: "kube-api-access-bzngv") pod "ec165c57-f43f-4dbe-9768-bbfbab10826c" (UID: "ec165c57-f43f-4dbe-9768-bbfbab10826c"). InnerVolumeSpecName "kube-api-access-bzngv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.205935 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.206252 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzngv\" (UniqueName: \"kubernetes.io/projected/ec165c57-f43f-4dbe-9768-bbfbab10826c-kube-api-access-bzngv\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.538354 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hflw" event={"ID":"ec165c57-f43f-4dbe-9768-bbfbab10826c","Type":"ContainerDied","Data":"42827e7d731bec4daa40f2480531ca8dc88825d26f509aad8548d5805933cf66"} Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.538741 4770 scope.go:117] "RemoveContainer" containerID="d87c90d6ddf1bb1de86e9dd41d07f3ec4f478696078ba21fe056b5173d43f60a" Jan 26 18:45:28 crc kubenswrapper[4770]: I0126 18:45:28.538423 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hflw" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.032509 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.124232 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvczs\" (UniqueName: \"kubernetes.io/projected/0d23ddd5-e513-4a80-89ab-28f99522aaa8-kube-api-access-cvczs\") pod \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.124277 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-utilities\") pod \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.124363 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-catalog-content\") pod \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\" (UID: \"0d23ddd5-e513-4a80-89ab-28f99522aaa8\") " Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.125120 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-utilities" (OuterVolumeSpecName: "utilities") pod "0d23ddd5-e513-4a80-89ab-28f99522aaa8" (UID: "0d23ddd5-e513-4a80-89ab-28f99522aaa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.128687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d23ddd5-e513-4a80-89ab-28f99522aaa8-kube-api-access-cvczs" (OuterVolumeSpecName: "kube-api-access-cvczs") pod "0d23ddd5-e513-4a80-89ab-28f99522aaa8" (UID: "0d23ddd5-e513-4a80-89ab-28f99522aaa8"). InnerVolumeSpecName "kube-api-access-cvczs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.166380 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d23ddd5-e513-4a80-89ab-28f99522aaa8" (UID: "0d23ddd5-e513-4a80-89ab-28f99522aaa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.174647 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec165c57-f43f-4dbe-9768-bbfbab10826c" (UID: "ec165c57-f43f-4dbe-9768-bbfbab10826c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.198275 4770 scope.go:117] "RemoveContainer" containerID="f71cec6d39bb00d75a46d652412ae28981e39a247e95ac064e9baa7245238649" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.215378 4770 scope.go:117] "RemoveContainer" containerID="cd905c4689077c61fa0e326635b24a2d6fe28583530927dffdfbd681ebcd43f8" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.226164 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvczs\" (UniqueName: \"kubernetes.io/projected/0d23ddd5-e513-4a80-89ab-28f99522aaa8-kube-api-access-cvczs\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.226195 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.226204 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ec165c57-f43f-4dbe-9768-bbfbab10826c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.226213 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d23ddd5-e513-4a80-89ab-28f99522aaa8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.474503 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hflw"] Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.490053 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2hflw"] Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.548551 4770 generic.go:334] "Generic (PLEG): container finished" podID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerID="e558f3da9787667f79a15ca152c25fcc21bbeac4a50b8b4bba454e73f1c772ec" exitCode=0 Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.548663 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hfx" event={"ID":"8ef1bc77-ee7c-490f-9df4-d891bcc631e6","Type":"ContainerDied","Data":"e558f3da9787667f79a15ca152c25fcc21bbeac4a50b8b4bba454e73f1c772ec"} Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.552304 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qmc66" event={"ID":"0d23ddd5-e513-4a80-89ab-28f99522aaa8","Type":"ContainerDied","Data":"1e8a1b52292c97fdd24f0bccaf8cb730a396d7ee2551ef78b40a27019cb08a28"} Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.552408 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qmc66" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.552466 4770 scope.go:117] "RemoveContainer" containerID="e6b4b4445ee31e84ae838cec1280ae30e03d924d8f5294244346073a06af7912" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.570154 4770 scope.go:117] "RemoveContainer" containerID="bbf036292468797dffb18dd6f534a46db72fcbdc85ba7ad9f6d05382c3b9d74d" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.585847 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qmc66"] Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.586056 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qmc66"] Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.599989 4770 scope.go:117] "RemoveContainer" containerID="7faf23e8e404a390e4bcdb1fa872780101f871a8b9488baf7f9c91033aa979be" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.778906 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" path="/var/lib/kubelet/pods/0d23ddd5-e513-4a80-89ab-28f99522aaa8/volumes" Jan 26 18:45:29 crc kubenswrapper[4770]: I0126 18:45:29.780115 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" path="/var/lib/kubelet/pods/ec165c57-f43f-4dbe-9768-bbfbab10826c/volumes" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.331201 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.331251 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.331296 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.331907 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.332014 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573" gracePeriod=600 Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.503035 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.557739 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tp628" event={"ID":"46328d44-acf0-4a1f-86c9-c2c08d21640e","Type":"ContainerStarted","Data":"60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548"} Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.560045 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hfx" event={"ID":"8ef1bc77-ee7c-490f-9df4-d891bcc631e6","Type":"ContainerDied","Data":"a3e72e274944ec1b104674e45a3f961a81d4b2a8719f29bedd041b45def52c13"} Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.560114 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hfx" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.560121 4770 scope.go:117] "RemoveContainer" containerID="e558f3da9787667f79a15ca152c25fcc21bbeac4a50b8b4bba454e73f1c772ec" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.575873 4770 scope.go:117] "RemoveContainer" containerID="667d8c8ecff6a16e524a6aa4a94d82d80035775dcc46cb4344ee4aeaea2b3202" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.591579 4770 scope.go:117] "RemoveContainer" containerID="30ea9e8f69b998145d79a548ed08f1727f8c2f8241d88eb302f711bdd37d5b69" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.645197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-catalog-content\") pod \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.645290 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsntc\" (UniqueName: 
\"kubernetes.io/projected/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-kube-api-access-tsntc\") pod \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.645325 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-utilities\") pod \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\" (UID: \"8ef1bc77-ee7c-490f-9df4-d891bcc631e6\") " Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.646616 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-utilities" (OuterVolumeSpecName: "utilities") pod "8ef1bc77-ee7c-490f-9df4-d891bcc631e6" (UID: "8ef1bc77-ee7c-490f-9df4-d891bcc631e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.650790 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-kube-api-access-tsntc" (OuterVolumeSpecName: "kube-api-access-tsntc") pod "8ef1bc77-ee7c-490f-9df4-d891bcc631e6" (UID: "8ef1bc77-ee7c-490f-9df4-d891bcc631e6"). InnerVolumeSpecName "kube-api-access-tsntc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.664811 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ef1bc77-ee7c-490f-9df4-d891bcc631e6" (UID: "8ef1bc77-ee7c-490f-9df4-d891bcc631e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.747365 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.747413 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsntc\" (UniqueName: \"kubernetes.io/projected/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-kube-api-access-tsntc\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.747437 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef1bc77-ee7c-490f-9df4-d891bcc631e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.916097 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hfx"] Jan 26 18:45:30 crc kubenswrapper[4770]: I0126 18:45:30.919902 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hfx"] Jan 26 18:45:31 crc kubenswrapper[4770]: I0126 18:45:31.569167 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573" exitCode=0 Jan 26 18:45:31 crc kubenswrapper[4770]: I0126 18:45:31.569265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573"} Jan 26 18:45:31 crc kubenswrapper[4770]: I0126 18:45:31.569298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" 
event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"262471d88f35d197f54a30215bc979a01665f1c69b2a33190dca6d33020b72c9"} Jan 26 18:45:31 crc kubenswrapper[4770]: I0126 18:45:31.574065 4770 generic.go:334] "Generic (PLEG): container finished" podID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerID="60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548" exitCode=0 Jan 26 18:45:31 crc kubenswrapper[4770]: I0126 18:45:31.574151 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tp628" event={"ID":"46328d44-acf0-4a1f-86c9-c2c08d21640e","Type":"ContainerDied","Data":"60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548"} Jan 26 18:45:31 crc kubenswrapper[4770]: I0126 18:45:31.779484 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" path="/var/lib/kubelet/pods/8ef1bc77-ee7c-490f-9df4-d891bcc631e6/volumes" Jan 26 18:45:32 crc kubenswrapper[4770]: I0126 18:45:32.585044 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tp628" event={"ID":"46328d44-acf0-4a1f-86c9-c2c08d21640e","Type":"ContainerStarted","Data":"ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429"} Jan 26 18:45:32 crc kubenswrapper[4770]: I0126 18:45:32.607523 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tp628" podStartSLOduration=2.484883812 podStartE2EDuration="1m11.607505989s" podCreationTimestamp="2026-01-26 18:44:21 +0000 UTC" firstStartedPulling="2026-01-26 18:44:22.869926505 +0000 UTC m=+147.434833237" lastFinishedPulling="2026-01-26 18:45:31.992548692 +0000 UTC m=+216.557455414" observedRunningTime="2026-01-26 18:45:32.604663071 +0000 UTC m=+217.169569863" watchObservedRunningTime="2026-01-26 18:45:32.607505989 +0000 UTC m=+217.172412731" Jan 26 18:45:34 crc kubenswrapper[4770]: I0126 18:45:34.024794 
4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2b2nm"] Jan 26 18:45:34 crc kubenswrapper[4770]: I0126 18:45:34.873906 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:45:34 crc kubenswrapper[4770]: I0126 18:45:34.912809 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2l6x4" Jan 26 18:45:35 crc kubenswrapper[4770]: I0126 18:45:35.304066 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:45:35 crc kubenswrapper[4770]: I0126 18:45:35.350860 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:45:38 crc kubenswrapper[4770]: I0126 18:45:38.995916 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g549h"] Jan 26 18:45:38 crc kubenswrapper[4770]: I0126 18:45:38.996426 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g549h" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="registry-server" containerID="cri-o://647aed8a9eba0f140c6972c59da0ed24dfe24b65e9939f1596652e9f238be83d" gracePeriod=2 Jan 26 18:45:39 crc kubenswrapper[4770]: I0126 18:45:39.631050 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerID="647aed8a9eba0f140c6972c59da0ed24dfe24b65e9939f1596652e9f238be83d" exitCode=0 Jan 26 18:45:39 crc kubenswrapper[4770]: I0126 18:45:39.631116 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g549h" event={"ID":"f4f80ce7-123b-4717-92a7-73b09ba8c282","Type":"ContainerDied","Data":"647aed8a9eba0f140c6972c59da0ed24dfe24b65e9939f1596652e9f238be83d"} Jan 26 18:45:39 
crc kubenswrapper[4770]: I0126 18:45:39.919765 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:45:39 crc kubenswrapper[4770]: I0126 18:45:39.987674 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-catalog-content\") pod \"f4f80ce7-123b-4717-92a7-73b09ba8c282\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " Jan 26 18:45:39 crc kubenswrapper[4770]: I0126 18:45:39.987756 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-utilities\") pod \"f4f80ce7-123b-4717-92a7-73b09ba8c282\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " Jan 26 18:45:39 crc kubenswrapper[4770]: I0126 18:45:39.987807 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq7sd\" (UniqueName: \"kubernetes.io/projected/f4f80ce7-123b-4717-92a7-73b09ba8c282-kube-api-access-lq7sd\") pod \"f4f80ce7-123b-4717-92a7-73b09ba8c282\" (UID: \"f4f80ce7-123b-4717-92a7-73b09ba8c282\") " Jan 26 18:45:39 crc kubenswrapper[4770]: I0126 18:45:39.989212 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-utilities" (OuterVolumeSpecName: "utilities") pod "f4f80ce7-123b-4717-92a7-73b09ba8c282" (UID: "f4f80ce7-123b-4717-92a7-73b09ba8c282"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:39 crc kubenswrapper[4770]: I0126 18:45:39.996204 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f80ce7-123b-4717-92a7-73b09ba8c282-kube-api-access-lq7sd" (OuterVolumeSpecName: "kube-api-access-lq7sd") pod "f4f80ce7-123b-4717-92a7-73b09ba8c282" (UID: "f4f80ce7-123b-4717-92a7-73b09ba8c282"). InnerVolumeSpecName "kube-api-access-lq7sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.090889 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq7sd\" (UniqueName: \"kubernetes.io/projected/f4f80ce7-123b-4717-92a7-73b09ba8c282-kube-api-access-lq7sd\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.090948 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.111759 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4f80ce7-123b-4717-92a7-73b09ba8c282" (UID: "f4f80ce7-123b-4717-92a7-73b09ba8c282"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.191713 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f80ce7-123b-4717-92a7-73b09ba8c282-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.637579 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g549h" event={"ID":"f4f80ce7-123b-4717-92a7-73b09ba8c282","Type":"ContainerDied","Data":"cd36f8f63040f321680857554a4b0e69d2647c515ee2bedf6107c4e296a3a7f4"} Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.638056 4770 scope.go:117] "RemoveContainer" containerID="647aed8a9eba0f140c6972c59da0ed24dfe24b65e9939f1596652e9f238be83d" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.637655 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g549h" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.668131 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g549h"] Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.671364 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g549h"] Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.672164 4770 scope.go:117] "RemoveContainer" containerID="38bcd068abd6e1ec1287235d0a17af449e05b848bcd9e062c45697210e94ec2a" Jan 26 18:45:40 crc kubenswrapper[4770]: I0126 18:45:40.689395 4770 scope.go:117] "RemoveContainer" containerID="8a4620bb9c41f2024703e30d6408d13feb271f39c22ff7ae6bcef116b0ce6d68" Jan 26 18:45:41 crc kubenswrapper[4770]: I0126 18:45:41.666835 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:45:41 crc kubenswrapper[4770]: I0126 18:45:41.666910 4770 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:45:41 crc kubenswrapper[4770]: I0126 18:45:41.729830 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:45:41 crc kubenswrapper[4770]: I0126 18:45:41.778742 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" path="/var/lib/kubelet/pods/f4f80ce7-123b-4717-92a7-73b09ba8c282/volumes" Jan 26 18:45:42 crc kubenswrapper[4770]: I0126 18:45:42.715034 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tp628" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.879806 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880140 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880158 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880171 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880179 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880191 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880198 4770 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880209 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880216 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880234 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880242 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880256 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880267 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880278 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880288 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880307 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880316 4770 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880326 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880334 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880346 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880360 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880385 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880395 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="extract-utilities" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.880410 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880421 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="extract-content" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880545 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f80ce7-123b-4717-92a7-73b09ba8c282" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880568 4770 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ec165c57-f43f-4dbe-9768-bbfbab10826c" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880582 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d23ddd5-e513-4a80-89ab-28f99522aaa8" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.880594 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef1bc77-ee7c-490f-9df4-d891bcc631e6" containerName="registry-server" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.881066 4770 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.881301 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.881511 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846" gracePeriod=15 Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.881606 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed" gracePeriod=15 Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.881667 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187" gracePeriod=15 Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.881765 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04" gracePeriod=15 Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.881797 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82" gracePeriod=15 Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.884413 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.884771 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.884795 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.884816 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.884828 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.884848 4770 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.884860 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.884876 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.884888 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.884910 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.884923 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.884944 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.884955 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.885192 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.885217 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.885248 4770 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.885274 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.885296 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 18:45:44 crc kubenswrapper[4770]: E0126 18:45:44.885533 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.885553 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.885815 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.970367 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.970444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.970507 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.970658 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.970755 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.970884 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.970972 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:44 crc kubenswrapper[4770]: I0126 18:45:44.971015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.071910 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.071957 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072016 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072034 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072072 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072098 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072114 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072129 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072144 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072157 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072172 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072186 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072230 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072249 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.072295 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.593613 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.593686 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.676907 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.679551 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.681112 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04" exitCode=0
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.681154 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed" exitCode=0
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.681171 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82" exitCode=0
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.681189 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187" exitCode=2
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.681266 4770 scope.go:117] "RemoveContainer" containerID="a9a461a171c2ee7109eb9455d003479894bbb4149344b6bacf6117fed26c82a5"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.684526 4770 generic.go:334] "Generic (PLEG): container finished" podID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" containerID="fff4c6206931f4f14f80d68e389f0b939fb6eee31b2ad014aec039f9a3c8d5c9" exitCode=0
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.684575 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"726d3596-cd98-4f3e-a8ae-eaf054ecd391","Type":"ContainerDied","Data":"fff4c6206931f4f14f80d68e389f0b939fb6eee31b2ad014aec039f9a3c8d5c9"}
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.685461 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.686110 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.772323 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:45 crc kubenswrapper[4770]: I0126 18:45:45.773077 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:46 crc kubenswrapper[4770]: I0126 18:45:46.694657 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.003087 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.004020 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.109317 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-var-lock\") pod \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") "
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.109665 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kubelet-dir\") pod \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") "
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.109459 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-var-lock" (OuterVolumeSpecName: "var-lock") pod "726d3596-cd98-4f3e-a8ae-eaf054ecd391" (UID: "726d3596-cd98-4f3e-a8ae-eaf054ecd391"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.109728 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kube-api-access\") pod \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\" (UID: \"726d3596-cd98-4f3e-a8ae-eaf054ecd391\") "
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.109773 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "726d3596-cd98-4f3e-a8ae-eaf054ecd391" (UID: "726d3596-cd98-4f3e-a8ae-eaf054ecd391"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.109900 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.109912 4770 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/726d3596-cd98-4f3e-a8ae-eaf054ecd391-var-lock\") on node \"crc\" DevicePath \"\""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.115978 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "726d3596-cd98-4f3e-a8ae-eaf054ecd391" (UID: "726d3596-cd98-4f3e-a8ae-eaf054ecd391"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.211436 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726d3596-cd98-4f3e-a8ae-eaf054ecd391-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.251824 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.252544 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.253174 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.253759 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.414645 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.414761 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.414766 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.414873 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.414906 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.414993 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.415204 4770 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.415229 4770 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.415247 4770 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.710373 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.713203 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846" exitCode=0
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.713351 4770 scope.go:117] "RemoveContainer" containerID="b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.713843 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.716330 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"726d3596-cd98-4f3e-a8ae-eaf054ecd391","Type":"ContainerDied","Data":"6bd62a5aab9223c979e58cc88dd38fa02f6245b40d78c69943947972cfc0ec4e"}
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.716390 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd62a5aab9223c979e58cc88dd38fa02f6245b40d78c69943947972cfc0ec4e"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.716438 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.739163 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.739690 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.743103 4770 scope.go:117] "RemoveContainer" containerID="a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.757848 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.758308 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.762789 4770 scope.go:117] "RemoveContainer" containerID="e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.781362 4770 scope.go:117] "RemoveContainer" containerID="b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.789010 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.797947 4770 scope.go:117] "RemoveContainer" containerID="34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.818082 4770 scope.go:117] "RemoveContainer" containerID="d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.843570 4770 scope.go:117] "RemoveContainer" containerID="b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04"
Jan 26 18:45:47 crc kubenswrapper[4770]: E0126 18:45:47.844109 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\": container with ID starting with b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04 not found: ID does not exist" containerID="b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.844218 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04"} err="failed to get container status \"b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\": rpc error: code = NotFound desc = could not find container \"b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04\": container with ID starting with b2d598e95dfddc150c83c4a82064869957a622586af31e172a97a09bf1b10e04 not found: ID does not exist"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.844317 4770 scope.go:117] "RemoveContainer" containerID="a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed"
Jan 26 18:45:47 crc kubenswrapper[4770]: E0126 18:45:47.845160 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\": container with ID starting with a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed not found: ID does not exist" containerID="a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.845205 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed"} err="failed to get container status \"a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\": rpc error: code = NotFound desc = could not find container \"a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed\": container with ID starting with a93320ae18867ded1b5eea0f11a11eb2c06540f4eb7c1f085c0e805c898463ed not found: ID does not exist"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.845234 4770 scope.go:117] "RemoveContainer" containerID="e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82"
Jan 26 18:45:47 crc kubenswrapper[4770]: E0126 18:45:47.845596 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\": container with ID starting with e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82 not found: ID does not exist" containerID="e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.845726 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82"} err="failed to get container status \"e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\": rpc error: code = NotFound desc = could not find container \"e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82\": container with ID starting with e8d6e15161996728a791a48a07393fc5a53a1d20c54e5f96d422c9c356253d82 not found: ID does not exist"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.845826 4770 scope.go:117] "RemoveContainer" containerID="b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187"
Jan 26 18:45:47 crc kubenswrapper[4770]: E0126 18:45:47.846160 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\": container with ID starting with b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187 not found: ID does not exist" containerID="b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.846192 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187"} err="failed to get container status \"b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\": rpc error: code = NotFound desc = could not find container \"b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187\": container with ID starting with b318570c085d8c5d98a8ce06e2f9a400f002383989f734a0e63a3147857ef187 not found: ID does not exist"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.846212 4770 scope.go:117] "RemoveContainer" containerID="34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846"
Jan 26 18:45:47 crc kubenswrapper[4770]: E0126 18:45:47.846438 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\": container with ID starting with 34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846 not found: ID does not exist" containerID="34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.846473 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846"} err="failed to get container status \"34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\": rpc error: code = NotFound desc = could not find container \"34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846\": container with ID starting with 34baf46cfe28dd862ad8c6c71f76880c881003201013538804b49679d8691846 not found: ID does not exist"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.846494 4770 scope.go:117] "RemoveContainer" containerID="d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5"
Jan 26 18:45:47 crc kubenswrapper[4770]: E0126 18:45:47.846744 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\": container with ID starting with d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5 not found: ID does not exist" containerID="d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5"
Jan 26 18:45:47 crc kubenswrapper[4770]: I0126 18:45:47.846841 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5"} err="failed to get container status \"d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\": rpc error: code = NotFound desc = could not find container \"d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5\": container with ID starting with d7b38a213677a996f07fccf6f8bf8c462c84ef794c7ccd883d6e983bf11ecca5 not found: ID does not exist"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.625888 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.626874 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.627565 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.628730 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.629272 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:49 crc kubenswrapper[4770]: I0126 18:45:49.629335 4770 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.629888 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.830775 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms"
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.917460 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:45:49 crc kubenswrapper[4770]: I0126 18:45:49.917925 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:45:49 crc kubenswrapper[4770]: W0126 18:45:49.950609 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1b91e6134e31ca32ab4b8b51b6ebb37bd91aa52729140c6a3522de8833b94786 WatchSource:0}: Error finding container 1b91e6134e31ca32ab4b8b51b6ebb37bd91aa52729140c6a3522de8833b94786: Status 404 returned error can't find the container with id 1b91e6134e31ca32ab4b8b51b6ebb37bd91aa52729140c6a3522de8833b94786
Jan 26 18:45:49 crc kubenswrapper[4770]: E0126 18:45:49.954367 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e5c43d1b03813 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 18:45:49.953734675 +0000 UTC m=+234.518641407,LastTimestamp:2026-01-26 18:45:49.953734675 +0000 UTC m=+234.518641407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 18:45:50 crc kubenswrapper[4770]: E0126 18:45:50.232187 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="800ms"
Jan 26 18:45:50 crc kubenswrapper[4770]: E0126 18:45:50.388364 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e5c43d1b03813 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 18:45:49.953734675 +0000 UTC m=+234.518641407,LastTimestamp:2026-01-26 18:45:49.953734675 +0000 UTC m=+234.518641407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 18:45:50 crc kubenswrapper[4770]: I0126 18:45:50.737810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf"}
Jan 26 18:45:50 crc kubenswrapper[4770]: I0126 18:45:50.737884 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1b91e6134e31ca32ab4b8b51b6ebb37bd91aa52729140c6a3522de8833b94786"}
Jan 26 18:45:50 crc kubenswrapper[4770]: I0126 18:45:50.738415 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:50 crc kubenswrapper[4770]: E0126 18:45:50.738739 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 18:45:51 crc kubenswrapper[4770]: E0126 18:45:51.034463 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s"
Jan 26 18:45:52 crc kubenswrapper[4770]: E0126 18:45:52.636615 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="3.2s"
Jan 26 18:45:55 crc kubenswrapper[4770]: I0126 18:45:55.773033 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:55 crc kubenswrapper[4770]: E0126 18:45:55.838266 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="6.4s"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.766356 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.767750 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.789387 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.789439 4770 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367" exitCode=1
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.789470 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367"}
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.790435 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.790720 4770 scope.go:117] "RemoveContainer" containerID="0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.791174 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.794553 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.794622 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e"
Jan 26 18:45:58 crc kubenswrapper[4770]: E0126 18:45:58.797121 4770 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:58 crc kubenswrapper[4770]: I0126 18:45:58.797869 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 18:45:58 crc kubenswrapper[4770]: W0126 18:45:58.823532 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-0489aeb5e397dadae24c694df895a9ad30ac333005d0a953be07666761ad38fe WatchSource:0}: Error finding container 0489aeb5e397dadae24c694df895a9ad30ac333005d0a953be07666761ad38fe: Status 404 returned error can't find the container with id 0489aeb5e397dadae24c694df895a9ad30ac333005d0a953be07666761ad38fe
Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.061664 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" containerName="oauth-openshift" containerID="cri-o://734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43" gracePeriod=15
Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.443996 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.444851 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.445222 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.445470 4770 status_manager.go:851] "Failed to get status for pod" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2b2nm\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575216 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-service-ca\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575268 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-dir\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: 
\"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575323 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-serving-cert\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575351 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-policies\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-idp-0-file-data\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575410 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-cliconfig\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575439 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-session\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575429 4770 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575491 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-ocp-branding-template\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575523 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-error\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575559 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-trusted-ca-bundle\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-login\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575615 4770 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-router-certs\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575649 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-provider-selection\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575681 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2lkn\" (UniqueName: \"kubernetes.io/projected/65b0fb1c-f1ee-475d-9c5c-55f66744622f-kube-api-access-h2lkn\") pod \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\" (UID: \"65b0fb1c-f1ee-475d-9c5c-55f66744622f\") " Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.575922 4770 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.576312 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.576349 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.576357 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.576870 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.581969 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b0fb1c-f1ee-475d-9c5c-55f66744622f-kube-api-access-h2lkn" (OuterVolumeSpecName: "kube-api-access-h2lkn") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "kube-api-access-h2lkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.586372 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.589165 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.589558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.589752 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.589892 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.590144 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.592303 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.592497 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "65b0fb1c-f1ee-475d-9c5c-55f66744622f" (UID: "65b0fb1c-f1ee-475d-9c5c-55f66744622f"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677281 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677321 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677337 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677350 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677364 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677374 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677386 4770 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677400 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677414 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677426 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677438 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677450 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/65b0fb1c-f1ee-475d-9c5c-55f66744622f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.677463 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2lkn\" (UniqueName: \"kubernetes.io/projected/65b0fb1c-f1ee-475d-9c5c-55f66744622f-kube-api-access-h2lkn\") on node \"crc\" 
DevicePath \"\"" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.795503 4770 generic.go:334] "Generic (PLEG): container finished" podID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" containerID="734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43" exitCode=0 Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.795571 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.795593 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" event={"ID":"65b0fb1c-f1ee-475d-9c5c-55f66744622f","Type":"ContainerDied","Data":"734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43"} Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.796779 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" event={"ID":"65b0fb1c-f1ee-475d-9c5c-55f66744622f","Type":"ContainerDied","Data":"b5ecbc0535fddc9d809079cb38bfc2806add1f6d0dd4373fc31f7a26b3ba1dcb"} Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.796810 4770 scope.go:117] "RemoveContainer" containerID="734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.799522 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.799592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9aab48192ecfac2ab6369193b1c1d8170ba0abf33e90dd78058b97a62112a0da"} Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.799910 4770 status_manager.go:851] 
"Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.801091 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.801651 4770 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="026587bc07c7cc1e5b866179c9759aad9d701d44f5549c5fc4cbb5287829593c" exitCode=0 Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.801688 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"026587bc07c7cc1e5b866179c9759aad9d701d44f5549c5fc4cbb5287829593c"} Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.801726 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0489aeb5e397dadae24c694df895a9ad30ac333005d0a953be07666761ad38fe"} Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.801941 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.801964 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:45:59 crc kubenswrapper[4770]: E0126 18:45:59.803299 4770 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.803362 4770 status_manager.go:851] "Failed to get status for pod" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2b2nm\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.804246 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.804580 4770 status_manager.go:851] "Failed to get status for pod" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.804934 4770 status_manager.go:851] "Failed to get status for pod" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" pod="openshift-authentication/oauth-openshift-558db77b4-2b2nm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2b2nm\": dial tcp 38.102.83.51:6443: 
connect: connection refused" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.811997 4770 scope.go:117] "RemoveContainer" containerID="734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43" Jan 26 18:45:59 crc kubenswrapper[4770]: E0126 18:45:59.812378 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43\": container with ID starting with 734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43 not found: ID does not exist" containerID="734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.812411 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43"} err="failed to get container status \"734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43\": rpc error: code = NotFound desc = could not find container \"734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43\": container with ID starting with 734dce6bf1fb85075c5e4f132703af9f2a0c0932ada7d6ae0d8da2c17d246e43 not found: ID does not exist" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.860414 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.860817 4770 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 18:45:59 crc kubenswrapper[4770]: I0126 18:45:59.860888 4770 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 18:46:00 crc kubenswrapper[4770]: I0126 18:46:00.809744 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"07e902368cfbdf28ccaa9f50557034a39838daa803bc05bb9891769c5373bbfe"} Jan 26 18:46:00 crc kubenswrapper[4770]: I0126 18:46:00.810321 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5a9ddc5b8d6099220f882158fe4486cbaa1c0f136008398dacc05462d7daba1"} Jan 26 18:46:00 crc kubenswrapper[4770]: I0126 18:46:00.810336 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3474f83f4ac0d8b13c0a661f71dac5a5ac9b6ddf1295b4039275b380f03863a6"} Jan 26 18:46:00 crc kubenswrapper[4770]: I0126 18:46:00.810347 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f7627645779fcade5ea1a2592f2f7fdd24b012c70c3d00ce29bd7cb9a1a2950"} Jan 26 18:46:01 crc kubenswrapper[4770]: I0126 18:46:01.849146 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"87878fb946c198eacd2ae3a6f10cb875ebb3aa6e66aec9533efd78a977310fb8"} Jan 26 18:46:01 crc kubenswrapper[4770]: I0126 18:46:01.849521 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:46:01 crc kubenswrapper[4770]: I0126 18:46:01.849419 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:01 crc kubenswrapper[4770]: I0126 18:46:01.849562 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:03 crc kubenswrapper[4770]: I0126 18:46:03.797995 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:46:03 crc kubenswrapper[4770]: I0126 18:46:03.798336 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:46:03 crc kubenswrapper[4770]: I0126 18:46:03.804867 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:46:06 crc kubenswrapper[4770]: I0126 18:46:06.863028 4770 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:46:06 crc kubenswrapper[4770]: I0126 18:46:06.957355 4770 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fb508847-0a06-4630-b26b-88592e74dd01" Jan 26 18:46:07 crc kubenswrapper[4770]: I0126 18:46:07.475212 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:46:07 crc kubenswrapper[4770]: I0126 18:46:07.888519 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:07 crc kubenswrapper[4770]: I0126 18:46:07.888559 4770 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:07 crc kubenswrapper[4770]: I0126 18:46:07.892553 4770 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fb508847-0a06-4630-b26b-88592e74dd01" Jan 26 18:46:07 crc kubenswrapper[4770]: I0126 18:46:07.893920 4770 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://1f7627645779fcade5ea1a2592f2f7fdd24b012c70c3d00ce29bd7cb9a1a2950" Jan 26 18:46:07 crc kubenswrapper[4770]: I0126 18:46:07.893943 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:46:08 crc kubenswrapper[4770]: I0126 18:46:08.895672 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:08 crc kubenswrapper[4770]: I0126 18:46:08.896287 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:08 crc kubenswrapper[4770]: I0126 18:46:08.900082 4770 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fb508847-0a06-4630-b26b-88592e74dd01" Jan 26 18:46:09 crc kubenswrapper[4770]: I0126 18:46:09.860776 4770 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 18:46:09 crc kubenswrapper[4770]: I0126 
18:46:09.861261 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 18:46:16 crc kubenswrapper[4770]: I0126 18:46:16.110194 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 18:46:16 crc kubenswrapper[4770]: I0126 18:46:16.486740 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 18:46:16 crc kubenswrapper[4770]: I0126 18:46:16.772529 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 18:46:17 crc kubenswrapper[4770]: I0126 18:46:17.055034 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 18:46:17 crc kubenswrapper[4770]: I0126 18:46:17.354521 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 18:46:17 crc kubenswrapper[4770]: I0126 18:46:17.652253 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 18:46:17 crc kubenswrapper[4770]: I0126 18:46:17.768431 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 18:46:17 crc kubenswrapper[4770]: I0126 18:46:17.801707 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 18:46:17 crc kubenswrapper[4770]: I0126 18:46:17.833352 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.026961 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.188594 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.317024 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.480675 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.567003 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.568538 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.672201 4770 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 18:46:18 crc kubenswrapper[4770]: I0126 18:46:18.725605 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.157347 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.346571 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.352885 
4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.404432 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.411160 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.487496 4770 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.496639 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.696012 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.725247 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.728892 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.824921 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.838391 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.861209 4770 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.861270 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.861333 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.862241 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9aab48192ecfac2ab6369193b1c1d8170ba0abf33e90dd78058b97a62112a0da"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.862488 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://9aab48192ecfac2ab6369193b1c1d8170ba0abf33e90dd78058b97a62112a0da" gracePeriod=30 Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.896224 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 18:46:19 crc kubenswrapper[4770]: I0126 18:46:19.951550 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.137220 4770 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.236260 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.262559 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.278229 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.288543 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.344303 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.383611 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.462451 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.541612 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.573929 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.592011 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.625763 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.641025 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.641599 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.666107 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.709829 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.772449 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.784173 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.788273 4770 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.796348 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2b2nm","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.796461 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.796970 4770 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.796999 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ecd3a1f0-f0f8-44a5-9af2-11165831609e" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.805340 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.806672 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.812013 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.830914 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.830883053 podStartE2EDuration="14.830883053s" podCreationTimestamp="2026-01-26 18:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:46:20.820014583 +0000 UTC m=+265.384921335" watchObservedRunningTime="2026-01-26 18:46:20.830883053 +0000 UTC m=+265.395789815" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.933847 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 18:46:20 crc kubenswrapper[4770]: I0126 18:46:20.990455 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.162293 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.355409 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.384005 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.404562 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.527034 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.551438 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.559559 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.701155 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.708555 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.770562 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.780490 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" 
path="/var/lib/kubelet/pods/65b0fb1c-f1ee-475d-9c5c-55f66744622f/volumes" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.842499 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.955783 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.957957 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 18:46:21 crc kubenswrapper[4770]: I0126 18:46:21.973000 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.105983 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.198200 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.233301 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.289496 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.335637 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.378555 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.468907 4770 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.477649 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-6rk5z"] Jan 26 18:46:22 crc kubenswrapper[4770]: E0126 18:46:22.477904 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" containerName="oauth-openshift" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.477927 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" containerName="oauth-openshift" Jan 26 18:46:22 crc kubenswrapper[4770]: E0126 18:46:22.477945 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" containerName="installer" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.477956 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" containerName="installer" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.478079 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="726d3596-cd98-4f3e-a8ae-eaf054ecd391" containerName="installer" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.478096 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b0fb1c-f1ee-475d-9c5c-55f66744622f" containerName="oauth-openshift" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.478551 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.481097 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.482334 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.483017 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.483325 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.483450 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.483475 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.483463 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.483688 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.488600 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.488687 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 18:46:22 crc 
kubenswrapper[4770]: I0126 18:46:22.489006 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.491868 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.500138 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.502366 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.508605 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.552530 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.552598 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.574469 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.574555 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.574610 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.574683 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.574827 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdc2q\" (UniqueName: \"kubernetes.io/projected/eaaba038-f0c5-42ab-9373-0a8d7c2728da-kube-api-access-wdc2q\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.574900 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.574967 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.575053 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.575109 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.575181 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-audit-policies\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" Jan 26 18:46:22 crc kubenswrapper[4770]: 
I0126 18:46:22.575299 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.575418 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.575455 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.575509 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eaaba038-f0c5-42ab-9373-0a8d7c2728da-audit-dir\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.619324 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.669783 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676704 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676800 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676869 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eaaba038-f0c5-42ab-9373-0a8d7c2728da-audit-dir\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676889 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676906 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676952 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.676987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdc2q\" (UniqueName: \"kubernetes.io/projected/eaaba038-f0c5-42ab-9373-0a8d7c2728da-kube-api-access-wdc2q\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.677018 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.677039 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.677057 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.677074 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.677097 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-audit-policies\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.677715 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-audit-policies\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.678371 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eaaba038-f0c5-42ab-9373-0a8d7c2728da-audit-dir\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.678840 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.679328 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.679839 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.689744 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.689753 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.689774 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.689786 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.690087 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.690207 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.690409 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.691990 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eaaba038-f0c5-42ab-9373-0a8d7c2728da-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.699405 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.703375 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdc2q\" (UniqueName: \"kubernetes.io/projected/eaaba038-f0c5-42ab-9373-0a8d7c2728da-kube-api-access-wdc2q\") pod \"oauth-openshift-96d6999f9-6rk5z\" (UID: \"eaaba038-f0c5-42ab-9373-0a8d7c2728da\") " pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.755222 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.786885 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.798901 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.836224 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 18:46:22 crc kubenswrapper[4770]: I0126 18:46:22.887924 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.102352 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.160318 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.165950 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.177371 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.187337 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.290521 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.304864 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.350863 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.356690 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.403355 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.480321 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.603777 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.608747 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.668655 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.742226 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.801455 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.931153 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 18:46:23 crc kubenswrapper[4770]: I0126 18:46:23.989851 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.069227 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.113993 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.130406 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.135600 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.234511 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.274431 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.382254 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.394468 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.424347 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.434597 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.471845 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.494224 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.545893 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.560173 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.623670 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.635491 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.646520 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.655657 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.686198 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.755360 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.844441 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 26 18:46:24 crc kubenswrapper[4770]: I0126 18:46:24.922912 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.070784 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.147976 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.158621 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.284277 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.325751 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.325770 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.404657 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.481361 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.487073 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-6rk5z"]
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.492732 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.564056 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.567105 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.665535 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.692761 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.711084 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.718336 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.822678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.834249 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-6rk5z"]
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.881664 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 18:46:25 crc kubenswrapper[4770]: I0126 18:46:25.892118 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.015248 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" event={"ID":"eaaba038-f0c5-42ab-9373-0a8d7c2728da","Type":"ContainerStarted","Data":"f89440d63b822b202401943b738ae247b4674cad627dd616212300411b7eada9"}
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.024961 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.065312 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.122830 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.216842 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.222723 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.227112 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.257311 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.264171 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.271343 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.283306 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.422395 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.469156 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.539348 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.560323 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.606119 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.738316 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.752000 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.785350 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.927129 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.930758 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.937833 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 26 18:46:26 crc kubenswrapper[4770]: I0126 18:46:26.953500 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.022101 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" event={"ID":"eaaba038-f0c5-42ab-9373-0a8d7c2728da","Type":"ContainerStarted","Data":"4104464c7d705cc7448903466068a6338cbb71f5a19bdcf0f74aea1b9a4e63fe"}
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.022485 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.030360 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.049231 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-96d6999f9-6rk5z" podStartSLOduration=53.049211425 podStartE2EDuration="53.049211425s" podCreationTimestamp="2026-01-26 18:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:46:27.044208208 +0000 UTC m=+271.609114950" watchObservedRunningTime="2026-01-26 18:46:27.049211425 +0000 UTC m=+271.614118177"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.133671 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.377830 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.392620 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.415267 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.486400 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.534769 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.626435 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.686373 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.781679 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.782879 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.850201 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 26 18:46:27 crc kubenswrapper[4770]: I0126 18:46:27.894739 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.043093 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.062277 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.074474 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.090122 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.175002 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.210542 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.254298 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.293442 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.462730 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.474530 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.474816 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.508376 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.624433 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.705078 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.735890 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.742093 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.753297 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.765251 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.967390 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 18:46:28 crc kubenswrapper[4770]: I0126 18:46:28.972240 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.022826 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.220733 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.255982 4770 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.256408 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf" gracePeriod=5
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.327068 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.387017 4770 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.451110 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.462368 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.531469 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.579826 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.618048 4770 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.716061 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.778803 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.787967 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.823441 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.879223 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 18:46:29 crc kubenswrapper[4770]: I0126 18:46:29.891941 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.013746 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.033369 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.097459 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.117115 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.146651 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.377149 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.470551 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.684348 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.840962 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.886437 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 18:46:30 crc kubenswrapper[4770]: I0126 18:46:30.978939 4770 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 18:46:31 crc kubenswrapper[4770]: I0126 18:46:31.025976 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 18:46:31 crc kubenswrapper[4770]: I0126 18:46:31.074154 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 18:46:31 crc kubenswrapper[4770]: I0126 18:46:31.135048 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 18:46:31 crc kubenswrapper[4770]: 
I0126 18:46:31.405560 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 18:46:31 crc kubenswrapper[4770]: I0126 18:46:31.535131 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 18:46:31 crc kubenswrapper[4770]: I0126 18:46:31.616478 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 18:46:31 crc kubenswrapper[4770]: I0126 18:46:31.932965 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 18:46:32 crc kubenswrapper[4770]: I0126 18:46:32.079163 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 18:46:32 crc kubenswrapper[4770]: I0126 18:46:32.131544 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 18:46:32 crc kubenswrapper[4770]: I0126 18:46:32.527458 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 18:46:32 crc kubenswrapper[4770]: I0126 18:46:32.570417 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 18:46:32 crc kubenswrapper[4770]: I0126 18:46:32.570505 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 18:46:32 crc kubenswrapper[4770]: I0126 18:46:32.929571 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 18:46:32 crc kubenswrapper[4770]: I0126 18:46:32.957925 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 
18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.842082 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.842185 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964068 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964122 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964170 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964185 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964184 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" 
(OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964208 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964263 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964305 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964342 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964842 4770 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964883 4770 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964910 4770 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.964937 4770 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:34 crc kubenswrapper[4770]: I0126 18:46:34.975092 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.066434 4770 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.077082 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.077232 4770 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf" exitCode=137 Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.077322 4770 scope.go:117] "RemoveContainer" containerID="e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf" Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.077464 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.102830 4770 scope.go:117] "RemoveContainer" containerID="e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf" Jan 26 18:46:35 crc kubenswrapper[4770]: E0126 18:46:35.103356 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf\": container with ID starting with e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf not found: ID does not exist" containerID="e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf" Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.103431 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf"} err="failed to get container status \"e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf\": rpc error: code = NotFound desc = could not find container \"e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf\": container with ID starting with e510474251479793589880d4739709a980815a0409dbb79775c1109d00712bcf not found: ID does not exist" Jan 26 18:46:35 crc kubenswrapper[4770]: I0126 18:46:35.776596 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 18:46:50 crc kubenswrapper[4770]: I0126 18:46:50.178797 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 18:46:50 crc kubenswrapper[4770]: I0126 18:46:50.180415 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 18:46:50 crc kubenswrapper[4770]: I0126 18:46:50.180459 4770 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9aab48192ecfac2ab6369193b1c1d8170ba0abf33e90dd78058b97a62112a0da" exitCode=137 Jan 26 18:46:50 crc kubenswrapper[4770]: I0126 18:46:50.180486 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9aab48192ecfac2ab6369193b1c1d8170ba0abf33e90dd78058b97a62112a0da"} Jan 26 18:46:50 crc kubenswrapper[4770]: I0126 18:46:50.180517 4770 scope.go:117] "RemoveContainer" containerID="0ec9f557c1f3f3ef71aef905b843f96c6bc23fe513754370a1a5e92a398ef367" Jan 26 18:46:51 crc kubenswrapper[4770]: I0126 18:46:51.189301 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 18:46:51 crc kubenswrapper[4770]: I0126 18:46:51.190875 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"92f733e4110f77427e5ae8732216a945e09cfcfd0cf8d6741be9ef0e98469793"} Jan 26 18:46:53 crc kubenswrapper[4770]: I0126 18:46:53.208126 4770 generic.go:334] "Generic (PLEG): container finished" podID="f8026767-1e92-4355-9225-bb0679727208" containerID="abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e" exitCode=0 Jan 26 18:46:53 crc kubenswrapper[4770]: I0126 18:46:53.208193 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" 
event={"ID":"f8026767-1e92-4355-9225-bb0679727208","Type":"ContainerDied","Data":"abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e"} Jan 26 18:46:53 crc kubenswrapper[4770]: I0126 18:46:53.208816 4770 scope.go:117] "RemoveContainer" containerID="abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e" Jan 26 18:46:54 crc kubenswrapper[4770]: I0126 18:46:54.219373 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" event={"ID":"f8026767-1e92-4355-9225-bb0679727208","Type":"ContainerStarted","Data":"1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6"} Jan 26 18:46:54 crc kubenswrapper[4770]: I0126 18:46:54.220116 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:46:54 crc kubenswrapper[4770]: I0126 18:46:54.221404 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" Jan 26 18:46:55 crc kubenswrapper[4770]: I0126 18:46:55.604576 4770 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 18:46:57 crc kubenswrapper[4770]: I0126 18:46:57.475660 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:46:59 crc kubenswrapper[4770]: I0126 18:46:59.860553 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:46:59 crc kubenswrapper[4770]: I0126 18:46:59.865127 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:47:00 crc kubenswrapper[4770]: I0126 18:47:00.269026 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 18:47:10 crc kubenswrapper[4770]: I0126 18:47:10.734048 4770 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.465983 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h8sjr"] Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.466162 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" podUID="a59b659e-3cc4-4463-9499-dfd40eec1d47" containerName="controller-manager" containerID="cri-o://48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518" gracePeriod=30 Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.479185 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"] Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.479417 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" podUID="2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" containerName="route-controller-manager" containerID="cri-o://ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268" gracePeriod=30 Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.833647 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.897162 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.971115 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-client-ca\") pod \"a59b659e-3cc4-4463-9499-dfd40eec1d47\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.971172 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-config\") pod \"a59b659e-3cc4-4463-9499-dfd40eec1d47\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.971250 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw947\" (UniqueName: \"kubernetes.io/projected/a59b659e-3cc4-4463-9499-dfd40eec1d47-kube-api-access-jw947\") pod \"a59b659e-3cc4-4463-9499-dfd40eec1d47\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.971297 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59b659e-3cc4-4463-9499-dfd40eec1d47-serving-cert\") pod \"a59b659e-3cc4-4463-9499-dfd40eec1d47\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.971319 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-proxy-ca-bundles\") pod \"a59b659e-3cc4-4463-9499-dfd40eec1d47\" (UID: \"a59b659e-3cc4-4463-9499-dfd40eec1d47\") " Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.971685 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-client-ca" (OuterVolumeSpecName: "client-ca") pod "a59b659e-3cc4-4463-9499-dfd40eec1d47" (UID: "a59b659e-3cc4-4463-9499-dfd40eec1d47"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.972088 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a59b659e-3cc4-4463-9499-dfd40eec1d47" (UID: "a59b659e-3cc4-4463-9499-dfd40eec1d47"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.972547 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-config" (OuterVolumeSpecName: "config") pod "a59b659e-3cc4-4463-9499-dfd40eec1d47" (UID: "a59b659e-3cc4-4463-9499-dfd40eec1d47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.979024 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59b659e-3cc4-4463-9499-dfd40eec1d47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a59b659e-3cc4-4463-9499-dfd40eec1d47" (UID: "a59b659e-3cc4-4463-9499-dfd40eec1d47"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:47:11 crc kubenswrapper[4770]: I0126 18:47:11.979154 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59b659e-3cc4-4463-9499-dfd40eec1d47-kube-api-access-jw947" (OuterVolumeSpecName: "kube-api-access-jw947") pod "a59b659e-3cc4-4463-9499-dfd40eec1d47" (UID: "a59b659e-3cc4-4463-9499-dfd40eec1d47"). InnerVolumeSpecName "kube-api-access-jw947". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.071928 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-serving-cert\") pod \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.071989 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-config\") pod \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072133 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-client-ca\") pod \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072202 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgq8p\" (UniqueName: \"kubernetes.io/projected/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-kube-api-access-sgq8p\") pod \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\" (UID: \"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db\") " Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072430 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072454 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc 
kubenswrapper[4770]: I0126 18:47:12.072466 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw947\" (UniqueName: \"kubernetes.io/projected/a59b659e-3cc4-4463-9499-dfd40eec1d47-kube-api-access-jw947\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072480 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a59b659e-3cc4-4463-9499-dfd40eec1d47-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072490 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a59b659e-3cc4-4463-9499-dfd40eec1d47-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072760 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" (UID: "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.072883 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-config" (OuterVolumeSpecName: "config") pod "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" (UID: "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.075398 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" (UID: "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.077645 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-kube-api-access-sgq8p" (OuterVolumeSpecName: "kube-api-access-sgq8p") pod "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" (UID: "2d6475f7-5a18-43bd-bb55-c7d4a3bd33db"). InnerVolumeSpecName "kube-api-access-sgq8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.173870 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgq8p\" (UniqueName: \"kubernetes.io/projected/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-kube-api-access-sgq8p\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.173903 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.173912 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.173921 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.341854 4770 generic.go:334] "Generic (PLEG): container finished" podID="2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" containerID="ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268" exitCode=0 Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.341940 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" event={"ID":"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db","Type":"ContainerDied","Data":"ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268"} Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.341957 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.341972 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk" event={"ID":"2d6475f7-5a18-43bd-bb55-c7d4a3bd33db","Type":"ContainerDied","Data":"8089276758b33063c8d4c8e0288428a7463562c5e796b1d7a2d014c463d740ed"} Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.341994 4770 scope.go:117] "RemoveContainer" containerID="ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.344312 4770 generic.go:334] "Generic (PLEG): container finished" podID="a59b659e-3cc4-4463-9499-dfd40eec1d47" containerID="48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518" exitCode=0 Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.344336 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" event={"ID":"a59b659e-3cc4-4463-9499-dfd40eec1d47","Type":"ContainerDied","Data":"48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518"} Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.344353 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" event={"ID":"a59b659e-3cc4-4463-9499-dfd40eec1d47","Type":"ContainerDied","Data":"a70eadcd60c57799eec5efa3e561787ed4de47225d13bb47bd170501bc799eb8"} Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.344438 4770 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-h8sjr" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.357163 4770 scope.go:117] "RemoveContainer" containerID="ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268" Jan 26 18:47:12 crc kubenswrapper[4770]: E0126 18:47:12.358131 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268\": container with ID starting with ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268 not found: ID does not exist" containerID="ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.358165 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268"} err="failed to get container status \"ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268\": rpc error: code = NotFound desc = could not find container \"ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268\": container with ID starting with ff5a913dafa57f3c8fb1b2c4120444c86f02f5aabcc48113aaed4033f7bcd268 not found: ID does not exist" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.358189 4770 scope.go:117] "RemoveContainer" containerID="48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.375422 4770 scope.go:117] "RemoveContainer" containerID="48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518" Jan 26 18:47:12 crc kubenswrapper[4770]: E0126 18:47:12.375961 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518\": container 
with ID starting with 48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518 not found: ID does not exist" containerID="48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.375990 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518"} err="failed to get container status \"48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518\": rpc error: code = NotFound desc = could not find container \"48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518\": container with ID starting with 48f42e374e3875ef06746baa5f082ff96eae5dd57c1d07e195bddc39c06f0518 not found: ID does not exist" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.382468 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"] Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.388384 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fvbpk"] Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.392524 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h8sjr"] Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.395353 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-h8sjr"] Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.700544 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck"] Jan 26 18:47:12 crc kubenswrapper[4770]: E0126 18:47:12.700984 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 18:47:12 crc 
kubenswrapper[4770]: I0126 18:47:12.701013 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 18:47:12 crc kubenswrapper[4770]: E0126 18:47:12.701043 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" containerName="route-controller-manager" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.701056 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" containerName="route-controller-manager" Jan 26 18:47:12 crc kubenswrapper[4770]: E0126 18:47:12.701074 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59b659e-3cc4-4463-9499-dfd40eec1d47" containerName="controller-manager" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.701086 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59b659e-3cc4-4463-9499-dfd40eec1d47" containerName="controller-manager" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.701239 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.701260 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" containerName="route-controller-manager" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.701288 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59b659e-3cc4-4463-9499-dfd40eec1d47" containerName="controller-manager" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.701938 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.704487 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.704793 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.705005 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54464559b6-xchll"] Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.705870 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.705871 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.706692 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.706904 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.706904 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.709292 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.709378 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.709443 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.709576 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.709634 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.709646 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.718737 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.721400 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck"] Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.724373 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54464559b6-xchll"] Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.781763 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-proxy-ca-bundles\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.781942 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-config\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.782031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-serving-cert\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.782116 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7ql\" (UniqueName: \"kubernetes.io/projected/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-kube-api-access-zc7ql\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.782218 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-client-ca\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.782368 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb6c1a6c-25f9-4c68-a442-e3442489dd94-serving-cert\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " 
pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.782426 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-config\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.782457 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/cb6c1a6c-25f9-4c68-a442-e3442489dd94-kube-api-access-b6spz\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.782484 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-client-ca\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.883033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-config\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.883563 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6spz\" (UniqueName: 
\"kubernetes.io/projected/cb6c1a6c-25f9-4c68-a442-e3442489dd94-kube-api-access-b6spz\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.883651 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-client-ca\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.883773 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-proxy-ca-bundles\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.883877 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-config\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.883967 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-serving-cert\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc 
kubenswrapper[4770]: I0126 18:47:12.884060 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7ql\" (UniqueName: \"kubernetes.io/projected/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-kube-api-access-zc7ql\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.884182 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-client-ca\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.884339 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb6c1a6c-25f9-4c68-a442-e3442489dd94-serving-cert\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.884671 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-client-ca\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.884957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-config\") pod \"controller-manager-54464559b6-xchll\" (UID: 
\"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.885226 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-client-ca\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.885961 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-config\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.887756 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-proxy-ca-bundles\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.889459 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb6c1a6c-25f9-4c68-a442-e3442489dd94-serving-cert\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.894482 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-serving-cert\") 
pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.902499 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/cb6c1a6c-25f9-4c68-a442-e3442489dd94-kube-api-access-b6spz\") pod \"controller-manager-54464559b6-xchll\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:12 crc kubenswrapper[4770]: I0126 18:47:12.907773 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7ql\" (UniqueName: \"kubernetes.io/projected/b673c717-bbb8-408a-99c3-c3aa3eaa9d6c-kube-api-access-zc7ql\") pod \"route-controller-manager-8c4b4db85-vqsck\" (UID: \"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c\") " pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:13 crc kubenswrapper[4770]: I0126 18:47:13.072660 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:13 crc kubenswrapper[4770]: I0126 18:47:13.082203 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:13 crc kubenswrapper[4770]: I0126 18:47:13.297859 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck"] Jan 26 18:47:13 crc kubenswrapper[4770]: I0126 18:47:13.341879 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54464559b6-xchll"] Jan 26 18:47:13 crc kubenswrapper[4770]: W0126 18:47:13.348498 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb6c1a6c_25f9_4c68_a442_e3442489dd94.slice/crio-efebde322f8fd67400f4e9d4ca8c37289c3e8c3520e9c5ac9408da5f778a2d6a WatchSource:0}: Error finding container efebde322f8fd67400f4e9d4ca8c37289c3e8c3520e9c5ac9408da5f778a2d6a: Status 404 returned error can't find the container with id efebde322f8fd67400f4e9d4ca8c37289c3e8c3520e9c5ac9408da5f778a2d6a Jan 26 18:47:13 crc kubenswrapper[4770]: I0126 18:47:13.353506 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" event={"ID":"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c","Type":"ContainerStarted","Data":"fafa1f346ccd2f8067c6705e194ce49e411a81455d01cbd317abac5a643e0858"} Jan 26 18:47:13 crc kubenswrapper[4770]: I0126 18:47:13.774081 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6475f7-5a18-43bd-bb55-c7d4a3bd33db" path="/var/lib/kubelet/pods/2d6475f7-5a18-43bd-bb55-c7d4a3bd33db/volumes" Jan 26 18:47:13 crc kubenswrapper[4770]: I0126 18:47:13.774942 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59b659e-3cc4-4463-9499-dfd40eec1d47" path="/var/lib/kubelet/pods/a59b659e-3cc4-4463-9499-dfd40eec1d47/volumes" Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.360995 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-54464559b6-xchll" event={"ID":"cb6c1a6c-25f9-4c68-a442-e3442489dd94","Type":"ContainerStarted","Data":"1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0"} Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.361397 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" event={"ID":"cb6c1a6c-25f9-4c68-a442-e3442489dd94","Type":"ContainerStarted","Data":"efebde322f8fd67400f4e9d4ca8c37289c3e8c3520e9c5ac9408da5f778a2d6a"} Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.361828 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.362194 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" event={"ID":"b673c717-bbb8-408a-99c3-c3aa3eaa9d6c","Type":"ContainerStarted","Data":"6ac53f6574ad33c3b91c72ac7dc00203b5c81bd4b7036a73ed0196de7e9fa2e0"} Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.362933 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.366529 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.370954 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.384882 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" podStartSLOduration=3.384865976 
podStartE2EDuration="3.384865976s" podCreationTimestamp="2026-01-26 18:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:47:14.381343523 +0000 UTC m=+318.946250255" watchObservedRunningTime="2026-01-26 18:47:14.384865976 +0000 UTC m=+318.949772698" Jan 26 18:47:14 crc kubenswrapper[4770]: I0126 18:47:14.401617 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8c4b4db85-vqsck" podStartSLOduration=3.40159967 podStartE2EDuration="3.40159967s" podCreationTimestamp="2026-01-26 18:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:47:14.397874759 +0000 UTC m=+318.962781511" watchObservedRunningTime="2026-01-26 18:47:14.40159967 +0000 UTC m=+318.966506412" Jan 26 18:47:35 crc kubenswrapper[4770]: I0126 18:47:35.968182 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzz8z"] Jan 26 18:47:35 crc kubenswrapper[4770]: I0126 18:47:35.969509 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:35 crc kubenswrapper[4770]: I0126 18:47:35.987535 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzz8z"]
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030677 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030802 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-registry-tls\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030828 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhrt\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-kube-api-access-hbhrt\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030849 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-registry-certificates\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030887 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-trusted-ca\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030913 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-bound-sa-token\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.030937 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.070087 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.132169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-registry-tls\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.132246 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhrt\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-kube-api-access-hbhrt\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.132279 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.132309 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-registry-certificates\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.132330 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-trusted-ca\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.132362 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-bound-sa-token\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.132419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.135587 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.137805 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-registry-certificates\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.138014 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-registry-tls\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.138029 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-trusted-ca\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.138443 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.152321 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-bound-sa-token\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.155956 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhrt\" (UniqueName: \"kubernetes.io/projected/7d9b945d-0de1-4a9b-b759-6599a6dbfe46-kube-api-access-hbhrt\") pod \"image-registry-66df7c8f76-nzz8z\" (UID: \"7d9b945d-0de1-4a9b-b759-6599a6dbfe46\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.295097 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:36 crc kubenswrapper[4770]: I0126 18:47:36.798837 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzz8z"]
Jan 26 18:47:37 crc kubenswrapper[4770]: I0126 18:47:37.512975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z" event={"ID":"7d9b945d-0de1-4a9b-b759-6599a6dbfe46","Type":"ContainerStarted","Data":"45b7b0d3650868495d0a1ee0ac1544d729d7beed0bfcafebccd9de64eb4d4409"}
Jan 26 18:47:37 crc kubenswrapper[4770]: I0126 18:47:37.513886 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z" event={"ID":"7d9b945d-0de1-4a9b-b759-6599a6dbfe46","Type":"ContainerStarted","Data":"cac32d4e26bf14afb1752b677aeb4b814993f0c51d9c929f2c69d241f7442fdd"}
Jan 26 18:47:37 crc kubenswrapper[4770]: I0126 18:47:37.513925 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z"
Jan 26 18:47:37 crc kubenswrapper[4770]: I0126 18:47:37.545596 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z" podStartSLOduration=2.545573622 podStartE2EDuration="2.545573622s" podCreationTimestamp="2026-01-26 18:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:47:37.545210111 +0000 UTC m=+342.110116903" watchObservedRunningTime="2026-01-26 18:47:37.545573622 +0000 UTC m=+342.110480394"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.733726 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tp628"]
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.734536 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tp628" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="registry-server" containerID="cri-o://ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429" gracePeriod=30
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.748548 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbq6m"]
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.748996 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zbq6m" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="registry-server" containerID="cri-o://98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" gracePeriod=30
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.755820 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24pqv"]
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.756013 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" containerID="cri-o://1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6" gracePeriod=30
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.766848 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dskv9"]
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.767069 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dskv9" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="registry-server" containerID="cri-o://110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d" gracePeriod=30
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.777778 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ttbbg"]
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.778530 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.783745 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2l6x4"]
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.784027 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2l6x4" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="registry-server" containerID="cri-o://0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff" gracePeriod=30
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.790262 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ttbbg"]
Jan 26 18:47:41 crc kubenswrapper[4770]: E0126 18:47:41.813040 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be is running failed: container process not found" containerID="98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 18:47:41 crc kubenswrapper[4770]: E0126 18:47:41.813446 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be is running failed: container process not found" containerID="98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 18:47:41 crc kubenswrapper[4770]: E0126 18:47:41.813669 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be is running failed: container process not found" containerID="98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" cmd=["grpc_health_probe","-addr=:50051"]
Jan 26 18:47:41 crc kubenswrapper[4770]: E0126 18:47:41.813725 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-zbq6m" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="registry-server"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.814187 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.814235 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68zd\" (UniqueName: \"kubernetes.io/projected/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-kube-api-access-h68zd\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.814269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.916257 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.916532 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68zd\" (UniqueName: \"kubernetes.io/projected/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-kube-api-access-h68zd\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.916565 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.918730 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.925122 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:41 crc kubenswrapper[4770]: I0126 18:47:41.937079 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68zd\" (UniqueName: \"kubernetes.io/projected/38d944cd-c6cb-4cf6-ada9-9077a8b9102e-kube-api-access-h68zd\") pod \"marketplace-operator-79b997595-ttbbg\" (UID: \"38d944cd-c6cb-4cf6-ada9-9077a8b9102e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.101438 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.307320 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tp628"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.308892 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbq6m"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.423380 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-utilities\") pod \"05fe33d7-6976-43c6-aa31-31751ac4f332\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.423499 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-929kx\" (UniqueName: \"kubernetes.io/projected/05fe33d7-6976-43c6-aa31-31751ac4f332-kube-api-access-929kx\") pod \"05fe33d7-6976-43c6-aa31-31751ac4f332\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.423556 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-catalog-content\") pod \"46328d44-acf0-4a1f-86c9-c2c08d21640e\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.423595 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85g72\" (UniqueName: \"kubernetes.io/projected/46328d44-acf0-4a1f-86c9-c2c08d21640e-kube-api-access-85g72\") pod \"46328d44-acf0-4a1f-86c9-c2c08d21640e\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.423632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-catalog-content\") pod \"05fe33d7-6976-43c6-aa31-31751ac4f332\" (UID: \"05fe33d7-6976-43c6-aa31-31751ac4f332\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.423673 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-utilities\") pod \"46328d44-acf0-4a1f-86c9-c2c08d21640e\" (UID: \"46328d44-acf0-4a1f-86c9-c2c08d21640e\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.426052 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-utilities" (OuterVolumeSpecName: "utilities") pod "46328d44-acf0-4a1f-86c9-c2c08d21640e" (UID: "46328d44-acf0-4a1f-86c9-c2c08d21640e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.426066 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-utilities" (OuterVolumeSpecName: "utilities") pod "05fe33d7-6976-43c6-aa31-31751ac4f332" (UID: "05fe33d7-6976-43c6-aa31-31751ac4f332"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.430539 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fe33d7-6976-43c6-aa31-31751ac4f332-kube-api-access-929kx" (OuterVolumeSpecName: "kube-api-access-929kx") pod "05fe33d7-6976-43c6-aa31-31751ac4f332" (UID: "05fe33d7-6976-43c6-aa31-31751ac4f332"). InnerVolumeSpecName "kube-api-access-929kx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.431353 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46328d44-acf0-4a1f-86c9-c2c08d21640e-kube-api-access-85g72" (OuterVolumeSpecName: "kube-api-access-85g72") pod "46328d44-acf0-4a1f-86c9-c2c08d21640e" (UID: "46328d44-acf0-4a1f-86c9-c2c08d21640e"). InnerVolumeSpecName "kube-api-access-85g72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.450428 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2l6x4"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.494287 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dskv9"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525193 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-utilities\") pod \"df32c63c-3381-4eff-8e21-969aaac5d74d\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525264 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-catalog-content\") pod \"df32c63c-3381-4eff-8e21-969aaac5d74d\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525317 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-catalog-content\") pod \"5ef61da5-d46a-4647-9372-2ef906bc7622\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525389 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlr85\" (UniqueName: \"kubernetes.io/projected/5ef61da5-d46a-4647-9372-2ef906bc7622-kube-api-access-hlr85\") pod \"5ef61da5-d46a-4647-9372-2ef906bc7622\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525412 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-utilities\") pod \"5ef61da5-d46a-4647-9372-2ef906bc7622\" (UID: \"5ef61da5-d46a-4647-9372-2ef906bc7622\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525427 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb2lx\" (UniqueName: \"kubernetes.io/projected/df32c63c-3381-4eff-8e21-969aaac5d74d-kube-api-access-hb2lx\") pod \"df32c63c-3381-4eff-8e21-969aaac5d74d\" (UID: \"df32c63c-3381-4eff-8e21-969aaac5d74d\") "
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525647 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85g72\" (UniqueName: \"kubernetes.io/projected/46328d44-acf0-4a1f-86c9-c2c08d21640e-kube-api-access-85g72\") on node \"crc\" DevicePath \"\""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525669 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525682 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.525744 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-929kx\" (UniqueName: \"kubernetes.io/projected/05fe33d7-6976-43c6-aa31-31751ac4f332-kube-api-access-929kx\") on node \"crc\" DevicePath \"\""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.526160 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-utilities" (OuterVolumeSpecName: "utilities") pod "df32c63c-3381-4eff-8e21-969aaac5d74d" (UID: "df32c63c-3381-4eff-8e21-969aaac5d74d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.526357 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.527512 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05fe33d7-6976-43c6-aa31-31751ac4f332" (UID: "05fe33d7-6976-43c6-aa31-31751ac4f332"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.527658 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-utilities" (OuterVolumeSpecName: "utilities") pod "5ef61da5-d46a-4647-9372-2ef906bc7622" (UID: "5ef61da5-d46a-4647-9372-2ef906bc7622"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.529533 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df32c63c-3381-4eff-8e21-969aaac5d74d-kube-api-access-hb2lx" (OuterVolumeSpecName: "kube-api-access-hb2lx") pod "df32c63c-3381-4eff-8e21-969aaac5d74d" (UID: "df32c63c-3381-4eff-8e21-969aaac5d74d"). InnerVolumeSpecName "kube-api-access-hb2lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.532816 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef61da5-d46a-4647-9372-2ef906bc7622-kube-api-access-hlr85" (OuterVolumeSpecName: "kube-api-access-hlr85") pod "5ef61da5-d46a-4647-9372-2ef906bc7622" (UID: "5ef61da5-d46a-4647-9372-2ef906bc7622"). InnerVolumeSpecName "kube-api-access-hlr85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.549521 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46328d44-acf0-4a1f-86c9-c2c08d21640e" (UID: "46328d44-acf0-4a1f-86c9-c2c08d21640e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.552191 4770 generic.go:334] "Generic (PLEG): container finished" podID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerID="110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d" exitCode=0
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.552264 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dskv9" event={"ID":"5ef61da5-d46a-4647-9372-2ef906bc7622","Type":"ContainerDied","Data":"110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.552299 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dskv9" event={"ID":"5ef61da5-d46a-4647-9372-2ef906bc7622","Type":"ContainerDied","Data":"c2025a804a93a4e3060bb3567adcc43522b26e6cbf76380768dae3f95ceb7c8a"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.552319 4770 scope.go:117] "RemoveContainer" containerID="110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.552454 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dskv9"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.556895 4770 generic.go:334] "Generic (PLEG): container finished" podID="f8026767-1e92-4355-9225-bb0679727208" containerID="1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6" exitCode=0
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.556986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" event={"ID":"f8026767-1e92-4355-9225-bb0679727208","Type":"ContainerDied","Data":"1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.557010 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv" event={"ID":"f8026767-1e92-4355-9225-bb0679727208","Type":"ContainerDied","Data":"5af3b8ef4b481dce41ead5eab6c1eaf5581978fa501857273d9af283e51d26f3"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.557092 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24pqv"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.559632 4770 generic.go:334] "Generic (PLEG): container finished" podID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerID="0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff" exitCode=0
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.559735 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2l6x4" event={"ID":"df32c63c-3381-4eff-8e21-969aaac5d74d","Type":"ContainerDied","Data":"0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.559756 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2l6x4" event={"ID":"df32c63c-3381-4eff-8e21-969aaac5d74d","Type":"ContainerDied","Data":"3401f94fb2feb7c8ff546d12e6440b7b3af1beb04a7227de4ec7b98f68d1867f"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.559860 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2l6x4"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.567180 4770 scope.go:117] "RemoveContainer" containerID="539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.567347 4770 generic.go:334] "Generic (PLEG): container finished" podID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerID="98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" exitCode=0
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.567412 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbq6m" event={"ID":"05fe33d7-6976-43c6-aa31-31751ac4f332","Type":"ContainerDied","Data":"98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.567444 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbq6m" event={"ID":"05fe33d7-6976-43c6-aa31-31751ac4f332","Type":"ContainerDied","Data":"5ca1867aa2293232895859a0c1021af6c42355360ec6f4a1768d7c540f11ace5"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.567539 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbq6m"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.575797 4770 generic.go:334] "Generic (PLEG): container finished" podID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerID="ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429" exitCode=0
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.575841 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tp628" event={"ID":"46328d44-acf0-4a1f-86c9-c2c08d21640e","Type":"ContainerDied","Data":"ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.575887 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tp628" event={"ID":"46328d44-acf0-4a1f-86c9-c2c08d21640e","Type":"ContainerDied","Data":"ab985e1ac0b282f86e5989a4d5f4e9bcf67c4d2341b54eba908b53a8b58d4470"}
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.575971 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tp628"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.577243 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ef61da5-d46a-4647-9372-2ef906bc7622" (UID: "5ef61da5-d46a-4647-9372-2ef906bc7622"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.587875 4770 scope.go:117] "RemoveContainer" containerID="f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.598833 4770 scope.go:117] "RemoveContainer" containerID="110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d"
Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.599345 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d\": container with ID starting with 110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d not found: ID does not exist" containerID="110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.599382 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d"} err="failed to get container status \"110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d\": rpc error: code = NotFound desc = could not find container \"110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d\": container with ID starting with 110973cbf2194650645b1fd5c86677df34db7ee3509232b5836a5aceff94229d not found: ID does not exist"
Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.599408 4770 scope.go:117] "RemoveContainer" containerID="539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7"
Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.599907 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7\": container with ID starting with 
539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7 not found: ID does not exist" containerID="539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.599954 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7"} err="failed to get container status \"539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7\": rpc error: code = NotFound desc = could not find container \"539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7\": container with ID starting with 539d8e353149097ab6f3195f6ea450cd543b3755ef2edc533b8f9ec4c8e98db7 not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.599983 4770 scope.go:117] "RemoveContainer" containerID="f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.606142 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73\": container with ID starting with f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73 not found: ID does not exist" containerID="f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.606189 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73"} err="failed to get container status \"f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73\": rpc error: code = NotFound desc = could not find container \"f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73\": container with ID starting with f987caf195380d51526209afa20608dc4b1ee713a0a30b1ef02b7730f6d9ac73 not found: ID does not 
exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.606219 4770 scope.go:117] "RemoveContainer" containerID="1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.612738 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbq6m"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.620511 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zbq6m"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.625653 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tp628"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.626643 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8026767-1e92-4355-9225-bb0679727208-marketplace-trusted-ca\") pod \"f8026767-1e92-4355-9225-bb0679727208\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.626688 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8026767-1e92-4355-9225-bb0679727208-marketplace-operator-metrics\") pod \"f8026767-1e92-4355-9225-bb0679727208\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.626787 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhqh\" (UniqueName: \"kubernetes.io/projected/f8026767-1e92-4355-9225-bb0679727208-kube-api-access-jrhqh\") pod \"f8026767-1e92-4355-9225-bb0679727208\" (UID: \"f8026767-1e92-4355-9225-bb0679727208\") " Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.627020 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/05fe33d7-6976-43c6-aa31-31751ac4f332-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.627038 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlr85\" (UniqueName: \"kubernetes.io/projected/5ef61da5-d46a-4647-9372-2ef906bc7622-kube-api-access-hlr85\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.627050 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.627059 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb2lx\" (UniqueName: \"kubernetes.io/projected/df32c63c-3381-4eff-8e21-969aaac5d74d-kube-api-access-hb2lx\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.627087 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.627097 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ef61da5-d46a-4647-9372-2ef906bc7622-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.627107 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46328d44-acf0-4a1f-86c9-c2c08d21640e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.628856 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8026767-1e92-4355-9225-bb0679727208-marketplace-trusted-ca" (OuterVolumeSpecName: 
"marketplace-trusted-ca") pod "f8026767-1e92-4355-9225-bb0679727208" (UID: "f8026767-1e92-4355-9225-bb0679727208"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.629624 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tp628"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.630993 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8026767-1e92-4355-9225-bb0679727208-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f8026767-1e92-4355-9225-bb0679727208" (UID: "f8026767-1e92-4355-9225-bb0679727208"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.632910 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ttbbg"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.633671 4770 scope.go:117] "RemoveContainer" containerID="abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.634719 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8026767-1e92-4355-9225-bb0679727208-kube-api-access-jrhqh" (OuterVolumeSpecName: "kube-api-access-jrhqh") pod "f8026767-1e92-4355-9225-bb0679727208" (UID: "f8026767-1e92-4355-9225-bb0679727208"). InnerVolumeSpecName "kube-api-access-jrhqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.647646 4770 scope.go:117] "RemoveContainer" containerID="1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.648272 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6\": container with ID starting with 1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6 not found: ID does not exist" containerID="1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.648313 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6"} err="failed to get container status \"1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6\": rpc error: code = NotFound desc = could not find container \"1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6\": container with ID starting with 1282f44214ae3a95a54764e998c8e04417c2bcc48217948c5b73daae03ba91a6 not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.648338 4770 scope.go:117] "RemoveContainer" containerID="abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.648597 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e\": container with ID starting with abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e not found: ID does not exist" containerID="abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.648614 
4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e"} err="failed to get container status \"abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e\": rpc error: code = NotFound desc = could not find container \"abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e\": container with ID starting with abbd7fbe8d3d7d80b7fab3e7387ab2d4bf9946bd5cb031379dd1096bb7b4517e not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.648626 4770 scope.go:117] "RemoveContainer" containerID="0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.663738 4770 scope.go:117] "RemoveContainer" containerID="b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.669434 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df32c63c-3381-4eff-8e21-969aaac5d74d" (UID: "df32c63c-3381-4eff-8e21-969aaac5d74d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.681022 4770 scope.go:117] "RemoveContainer" containerID="7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.703328 4770 scope.go:117] "RemoveContainer" containerID="0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.704083 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff\": container with ID starting with 0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff not found: ID does not exist" containerID="0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.704137 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff"} err="failed to get container status \"0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff\": rpc error: code = NotFound desc = could not find container \"0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff\": container with ID starting with 0030bede86447e7d37c8c29c2c7c3f0f170ea44d1407595ce8841f8d24f12dff not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.704173 4770 scope.go:117] "RemoveContainer" containerID="b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.704519 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca\": container with ID starting with 
b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca not found: ID does not exist" containerID="b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.704574 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca"} err="failed to get container status \"b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca\": rpc error: code = NotFound desc = could not find container \"b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca\": container with ID starting with b2fbbaa1d0eb273bf5a5fc7702348514684e93eb6d71b735b483bcf940cad7ca not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.704603 4770 scope.go:117] "RemoveContainer" containerID="7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.704993 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c\": container with ID starting with 7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c not found: ID does not exist" containerID="7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.705035 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c"} err="failed to get container status \"7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c\": rpc error: code = NotFound desc = could not find container \"7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c\": container with ID starting with 7d73478e1811ac4144cb6b2e36067ebe4dd27356e927b20c7d2170ae530d402c not found: ID does not 
exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.705057 4770 scope.go:117] "RemoveContainer" containerID="98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.720356 4770 scope.go:117] "RemoveContainer" containerID="2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.727786 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhqh\" (UniqueName: \"kubernetes.io/projected/f8026767-1e92-4355-9225-bb0679727208-kube-api-access-jrhqh\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.727817 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df32c63c-3381-4eff-8e21-969aaac5d74d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.727831 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8026767-1e92-4355-9225-bb0679727208-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.727843 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f8026767-1e92-4355-9225-bb0679727208-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.748864 4770 scope.go:117] "RemoveContainer" containerID="4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.770203 4770 scope.go:117] "RemoveContainer" containerID="98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.770672 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be\": container with ID starting with 98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be not found: ID does not exist" containerID="98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.770731 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be"} err="failed to get container status \"98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be\": rpc error: code = NotFound desc = could not find container \"98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be\": container with ID starting with 98250c2565ee3d32f7c37844d1a016256960adba55cb92854605f22b9001c0be not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.770757 4770 scope.go:117] "RemoveContainer" containerID="2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.771190 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4\": container with ID starting with 2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4 not found: ID does not exist" containerID="2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.771551 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4"} err="failed to get container status \"2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4\": rpc error: code = NotFound desc = could not find container 
\"2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4\": container with ID starting with 2d30535dfe138bd7b1a16915b960dcc8a22c4f1a2096396a67330fd3a8dd88d4 not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.771591 4770 scope.go:117] "RemoveContainer" containerID="4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.772132 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad\": container with ID starting with 4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad not found: ID does not exist" containerID="4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.772166 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad"} err="failed to get container status \"4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad\": rpc error: code = NotFound desc = could not find container \"4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad\": container with ID starting with 4c97bf025e23a8266b0b74c6c931dfd294beddd29c38a2e7359aa6871760edad not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.772185 4770 scope.go:117] "RemoveContainer" containerID="ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.787105 4770 scope.go:117] "RemoveContainer" containerID="60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.800988 4770 scope.go:117] "RemoveContainer" containerID="ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a" Jan 26 18:47:42 crc 
kubenswrapper[4770]: I0126 18:47:42.817101 4770 scope.go:117] "RemoveContainer" containerID="ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.817989 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429\": container with ID starting with ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429 not found: ID does not exist" containerID="ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.818027 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429"} err="failed to get container status \"ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429\": rpc error: code = NotFound desc = could not find container \"ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429\": container with ID starting with ad5297bba2a4a333daa84c779be1befc093e9056ae4651e177c11a974c9f6429 not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.818056 4770 scope.go:117] "RemoveContainer" containerID="60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.818395 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548\": container with ID starting with 60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548 not found: ID does not exist" containerID="60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.818420 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548"} err="failed to get container status \"60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548\": rpc error: code = NotFound desc = could not find container \"60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548\": container with ID starting with 60e2d16e2ec5efcdc00ea6da28ad3cec9f909de0a663758ebee211698299b548 not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.818441 4770 scope.go:117] "RemoveContainer" containerID="ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a" Jan 26 18:47:42 crc kubenswrapper[4770]: E0126 18:47:42.818806 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a\": container with ID starting with ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a not found: ID does not exist" containerID="ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.818831 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a"} err="failed to get container status \"ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a\": rpc error: code = NotFound desc = could not find container \"ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a\": container with ID starting with ec4214ec5808d3ca6bd397d1588132de47793a52180a54756c51a0faaa0b352a not found: ID does not exist" Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.914315 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24pqv"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.918277 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-24pqv"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.925684 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dskv9"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.929802 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dskv9"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.939138 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2l6x4"] Jan 26 18:47:42 crc kubenswrapper[4770]: I0126 18:47:42.941926 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2l6x4"] Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.585558 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg" event={"ID":"38d944cd-c6cb-4cf6-ada9-9077a8b9102e","Type":"ContainerStarted","Data":"529729efd5893af9ee4adaa2a151e0b9bf3b02c0bffdbd8c2e255f6d14500b70"} Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.585604 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg" event={"ID":"38d944cd-c6cb-4cf6-ada9-9077a8b9102e","Type":"ContainerStarted","Data":"1ab19fad25b198bdebe071e023c22382fee2f03f00e941ec5be81d2588b2e3b5"} Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.585900 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.588241 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.603525 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-ttbbg" podStartSLOduration=2.603501234 podStartE2EDuration="2.603501234s" podCreationTimestamp="2026-01-26 18:47:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:47:43.599904884 +0000 UTC m=+348.164811646" watchObservedRunningTime="2026-01-26 18:47:43.603501234 +0000 UTC m=+348.168407986" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.774657 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" path="/var/lib/kubelet/pods/05fe33d7-6976-43c6-aa31-31751ac4f332/volumes" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.775391 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" path="/var/lib/kubelet/pods/46328d44-acf0-4a1f-86c9-c2c08d21640e/volumes" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.776055 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" path="/var/lib/kubelet/pods/5ef61da5-d46a-4647-9372-2ef906bc7622/volumes" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.777012 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" path="/var/lib/kubelet/pods/df32c63c-3381-4eff-8e21-969aaac5d74d/volumes" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.777576 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8026767-1e92-4355-9225-bb0679727208" path="/var/lib/kubelet/pods/f8026767-1e92-4355-9225-bb0679727208/volumes" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.956751 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k9258"] Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.956940 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.956952 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.956960 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.956966 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.956977 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.956983 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.956993 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.956998 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957006 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957011 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957019 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957025 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957033 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957038 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957044 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957050 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957056 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957062 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="extract-content" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957070 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957075 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957085 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957090 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957100 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957105 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="extract-utilities" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957113 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957119 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957199 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="df32c63c-3381-4eff-8e21-969aaac5d74d" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957208 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fe33d7-6976-43c6-aa31-31751ac4f332" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957217 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957226 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="46328d44-acf0-4a1f-86c9-c2c08d21640e" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957233 4770 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957241 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef61da5-d46a-4647-9372-2ef906bc7622" containerName="registry-server" Jan 26 18:47:43 crc kubenswrapper[4770]: E0126 18:47:43.957314 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.957321 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8026767-1e92-4355-9225-bb0679727208" containerName="marketplace-operator" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.958443 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.961852 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 18:47:43 crc kubenswrapper[4770]: I0126 18:47:43.968611 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9258"] Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.047048 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b5e20-acef-49af-a137-a3a69b94cd1e-catalog-content\") pod \"redhat-marketplace-k9258\" (UID: \"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.047733 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b5e20-acef-49af-a137-a3a69b94cd1e-utilities\") pod \"redhat-marketplace-k9258\" (UID: 
\"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.047894 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mds\" (UniqueName: \"kubernetes.io/projected/d89b5e20-acef-49af-a137-a3a69b94cd1e-kube-api-access-x6mds\") pod \"redhat-marketplace-k9258\" (UID: \"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.149321 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b5e20-acef-49af-a137-a3a69b94cd1e-catalog-content\") pod \"redhat-marketplace-k9258\" (UID: \"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.149396 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b5e20-acef-49af-a137-a3a69b94cd1e-utilities\") pod \"redhat-marketplace-k9258\" (UID: \"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.149427 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6mds\" (UniqueName: \"kubernetes.io/projected/d89b5e20-acef-49af-a137-a3a69b94cd1e-kube-api-access-x6mds\") pod \"redhat-marketplace-k9258\" (UID: \"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.149897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89b5e20-acef-49af-a137-a3a69b94cd1e-catalog-content\") pod \"redhat-marketplace-k9258\" (UID: 
\"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.150106 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89b5e20-acef-49af-a137-a3a69b94cd1e-utilities\") pod \"redhat-marketplace-k9258\" (UID: \"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.160953 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5mzq8"] Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.165803 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.168581 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.169844 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mzq8"] Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.173808 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6mds\" (UniqueName: \"kubernetes.io/projected/d89b5e20-acef-49af-a137-a3a69b94cd1e-kube-api-access-x6mds\") pod \"redhat-marketplace-k9258\" (UID: \"d89b5e20-acef-49af-a137-a3a69b94cd1e\") " pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.250558 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-catalog-content\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " 
pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.250724 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-utilities\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.250834 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chpsh\" (UniqueName: \"kubernetes.io/projected/5265896d-8227-4910-a158-d447ed4139f4-kube-api-access-chpsh\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.277351 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.351770 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-utilities\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.351842 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chpsh\" (UniqueName: \"kubernetes.io/projected/5265896d-8227-4910-a158-d447ed4139f4-kube-api-access-chpsh\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.351932 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-catalog-content\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.352361 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-utilities\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.352436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-catalog-content\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.381875 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chpsh\" (UniqueName: \"kubernetes.io/projected/5265896d-8227-4910-a158-d447ed4139f4-kube-api-access-chpsh\") pod \"community-operators-5mzq8\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.505441 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.673669 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9258"] Jan 26 18:47:44 crc kubenswrapper[4770]: W0126 18:47:44.678501 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd89b5e20_acef_49af_a137_a3a69b94cd1e.slice/crio-11d51a6463f3e7ffe906b3a34212939869cf86f524b848d5fcc089781b3cc16b WatchSource:0}: Error finding container 11d51a6463f3e7ffe906b3a34212939869cf86f524b848d5fcc089781b3cc16b: Status 404 returned error can't find the container with id 11d51a6463f3e7ffe906b3a34212939869cf86f524b848d5fcc089781b3cc16b Jan 26 18:47:44 crc kubenswrapper[4770]: I0126 18:47:44.962932 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mzq8"] Jan 26 18:47:45 crc kubenswrapper[4770]: W0126 18:47:45.013412 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5265896d_8227_4910_a158_d447ed4139f4.slice/crio-d72469ddc55e7a5535edbaf00e1311c6277ff6b790df80f61fcf8c644969be39 WatchSource:0}: Error finding container d72469ddc55e7a5535edbaf00e1311c6277ff6b790df80f61fcf8c644969be39: Status 404 returned error can't find the container with id d72469ddc55e7a5535edbaf00e1311c6277ff6b790df80f61fcf8c644969be39 Jan 26 18:47:45 crc kubenswrapper[4770]: I0126 18:47:45.604212 4770 generic.go:334] "Generic (PLEG): container finished" podID="5265896d-8227-4910-a158-d447ed4139f4" containerID="edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712" exitCode=0 Jan 26 18:47:45 crc kubenswrapper[4770]: I0126 18:47:45.604268 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mzq8" 
event={"ID":"5265896d-8227-4910-a158-d447ed4139f4","Type":"ContainerDied","Data":"edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712"} Jan 26 18:47:45 crc kubenswrapper[4770]: I0126 18:47:45.604661 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mzq8" event={"ID":"5265896d-8227-4910-a158-d447ed4139f4","Type":"ContainerStarted","Data":"d72469ddc55e7a5535edbaf00e1311c6277ff6b790df80f61fcf8c644969be39"} Jan 26 18:47:45 crc kubenswrapper[4770]: I0126 18:47:45.606976 4770 generic.go:334] "Generic (PLEG): container finished" podID="d89b5e20-acef-49af-a137-a3a69b94cd1e" containerID="8179df544542c207dd46d9f85caafc132fe4288b7c61766af8833447ceee34e4" exitCode=0 Jan 26 18:47:45 crc kubenswrapper[4770]: I0126 18:47:45.607040 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9258" event={"ID":"d89b5e20-acef-49af-a137-a3a69b94cd1e","Type":"ContainerDied","Data":"8179df544542c207dd46d9f85caafc132fe4288b7c61766af8833447ceee34e4"} Jan 26 18:47:45 crc kubenswrapper[4770]: I0126 18:47:45.607074 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9258" event={"ID":"d89b5e20-acef-49af-a137-a3a69b94cd1e","Type":"ContainerStarted","Data":"11d51a6463f3e7ffe906b3a34212939869cf86f524b848d5fcc089781b3cc16b"} Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.352934 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7k8l4"] Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.354585 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.357364 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.366327 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7k8l4"] Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.516687 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727229ac-add6-4217-b9c8-b83ee24a8d11-utilities\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.516758 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgvk\" (UniqueName: \"kubernetes.io/projected/727229ac-add6-4217-b9c8-b83ee24a8d11-kube-api-access-fbgvk\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.516780 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727229ac-add6-4217-b9c8-b83ee24a8d11-catalog-content\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.552627 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g7flk"] Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.553564 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: W0126 18:47:46.555249 4770 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: secrets "redhat-operators-dockercfg-ct8rh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 26 18:47:46 crc kubenswrapper[4770]: E0126 18:47:46.555279 4770 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-operators-dockercfg-ct8rh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.571942 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g7flk"] Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.612829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mzq8" event={"ID":"5265896d-8227-4910-a158-d447ed4139f4","Type":"ContainerStarted","Data":"863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3"} Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.615312 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9258" event={"ID":"d89b5e20-acef-49af-a137-a3a69b94cd1e","Type":"ContainerStarted","Data":"978eafb36dd752caacf95ebaba8d6206ed9c0f35e4a10dccb4ae961c9086504f"} Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.617566 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/727229ac-add6-4217-b9c8-b83ee24a8d11-utilities\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.617624 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbgvk\" (UniqueName: \"kubernetes.io/projected/727229ac-add6-4217-b9c8-b83ee24a8d11-kube-api-access-fbgvk\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.617654 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727229ac-add6-4217-b9c8-b83ee24a8d11-catalog-content\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.618252 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727229ac-add6-4217-b9c8-b83ee24a8d11-catalog-content\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.618479 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727229ac-add6-4217-b9c8-b83ee24a8d11-utilities\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.638258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbgvk\" (UniqueName: 
\"kubernetes.io/projected/727229ac-add6-4217-b9c8-b83ee24a8d11-kube-api-access-fbgvk\") pod \"certified-operators-7k8l4\" (UID: \"727229ac-add6-4217-b9c8-b83ee24a8d11\") " pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.719396 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-utilities\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.719474 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rlbm\" (UniqueName: \"kubernetes.io/projected/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-kube-api-access-4rlbm\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.719520 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-catalog-content\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.757393 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.820579 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-utilities\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.820663 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rlbm\" (UniqueName: \"kubernetes.io/projected/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-kube-api-access-4rlbm\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.820735 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-catalog-content\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.821624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-catalog-content\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.821718 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-utilities\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " 
pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:46 crc kubenswrapper[4770]: I0126 18:47:46.851428 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rlbm\" (UniqueName: \"kubernetes.io/projected/e8ce5003-8637-4aaa-a35b-f8b6f9a04905-kube-api-access-4rlbm\") pod \"redhat-operators-g7flk\" (UID: \"e8ce5003-8637-4aaa-a35b-f8b6f9a04905\") " pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.165864 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7k8l4"] Jan 26 18:47:47 crc kubenswrapper[4770]: W0126 18:47:47.168667 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727229ac_add6_4217_b9c8_b83ee24a8d11.slice/crio-77736d638e37b8faec9d6a1bdc74ed2154ac7b950439bc20158a3003760188d8 WatchSource:0}: Error finding container 77736d638e37b8faec9d6a1bdc74ed2154ac7b950439bc20158a3003760188d8: Status 404 returned error can't find the container with id 77736d638e37b8faec9d6a1bdc74ed2154ac7b950439bc20158a3003760188d8 Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.623403 4770 generic.go:334] "Generic (PLEG): container finished" podID="5265896d-8227-4910-a158-d447ed4139f4" containerID="863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3" exitCode=0 Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.623484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mzq8" event={"ID":"5265896d-8227-4910-a158-d447ed4139f4","Type":"ContainerDied","Data":"863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3"} Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.630740 4770 generic.go:334] "Generic (PLEG): container finished" podID="727229ac-add6-4217-b9c8-b83ee24a8d11" containerID="a2c85150fa7d6a282d46b03beaf245596ffff7836016456aeb23a6049fdddd76" exitCode=0 Jan 26 
18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.630829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8l4" event={"ID":"727229ac-add6-4217-b9c8-b83ee24a8d11","Type":"ContainerDied","Data":"a2c85150fa7d6a282d46b03beaf245596ffff7836016456aeb23a6049fdddd76"} Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.630887 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8l4" event={"ID":"727229ac-add6-4217-b9c8-b83ee24a8d11","Type":"ContainerStarted","Data":"77736d638e37b8faec9d6a1bdc74ed2154ac7b950439bc20158a3003760188d8"} Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.633663 4770 generic.go:334] "Generic (PLEG): container finished" podID="d89b5e20-acef-49af-a137-a3a69b94cd1e" containerID="978eafb36dd752caacf95ebaba8d6206ed9c0f35e4a10dccb4ae961c9086504f" exitCode=0 Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.633768 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9258" event={"ID":"d89b5e20-acef-49af-a137-a3a69b94cd1e","Type":"ContainerDied","Data":"978eafb36dd752caacf95ebaba8d6206ed9c0f35e4a10dccb4ae961c9086504f"} Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.870854 4770 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/redhat-operators-g7flk" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.871560 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:47 crc kubenswrapper[4770]: I0126 18:47:47.890873 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.247979 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g7flk"] Jan 26 18:47:48 crc kubenswrapper[4770]: W0126 18:47:48.254983 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8ce5003_8637_4aaa_a35b_f8b6f9a04905.slice/crio-066dffbb19a65484fa63ebd95c3ca1c09acd10a9fdd3843ab8e2352cbf1c2432 WatchSource:0}: Error finding container 066dffbb19a65484fa63ebd95c3ca1c09acd10a9fdd3843ab8e2352cbf1c2432: Status 404 returned error can't find the container with id 066dffbb19a65484fa63ebd95c3ca1c09acd10a9fdd3843ab8e2352cbf1c2432 Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.645612 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mzq8" event={"ID":"5265896d-8227-4910-a158-d447ed4139f4","Type":"ContainerStarted","Data":"5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f"} Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.647849 4770 generic.go:334] "Generic (PLEG): container finished" podID="e8ce5003-8637-4aaa-a35b-f8b6f9a04905" containerID="2d604a793328b5a916ef86d052c15163ebc4a00a8fbcac69d973e9f3d75a6d32" exitCode=0 Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.647911 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7flk" event={"ID":"e8ce5003-8637-4aaa-a35b-f8b6f9a04905","Type":"ContainerDied","Data":"2d604a793328b5a916ef86d052c15163ebc4a00a8fbcac69d973e9f3d75a6d32"} Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.647934 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-g7flk" event={"ID":"e8ce5003-8637-4aaa-a35b-f8b6f9a04905","Type":"ContainerStarted","Data":"066dffbb19a65484fa63ebd95c3ca1c09acd10a9fdd3843ab8e2352cbf1c2432"} Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.650467 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8l4" event={"ID":"727229ac-add6-4217-b9c8-b83ee24a8d11","Type":"ContainerStarted","Data":"36eedea7a42b70e899645fafbe086ffbb210c4ab0575fde75337d0526b9c208e"} Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.654565 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9258" event={"ID":"d89b5e20-acef-49af-a137-a3a69b94cd1e","Type":"ContainerStarted","Data":"78cf98b8e2819e296b998b13fbf7a8887988e7a9f0b761ece612628a70ac7c4e"} Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.670303 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5mzq8" podStartSLOduration=2.24882129 podStartE2EDuration="4.670287621s" podCreationTimestamp="2026-01-26 18:47:44 +0000 UTC" firstStartedPulling="2026-01-26 18:47:45.606626239 +0000 UTC m=+350.171532971" lastFinishedPulling="2026-01-26 18:47:48.02809257 +0000 UTC m=+352.592999302" observedRunningTime="2026-01-26 18:47:48.668679546 +0000 UTC m=+353.233586278" watchObservedRunningTime="2026-01-26 18:47:48.670287621 +0000 UTC m=+353.235194353" Jan 26 18:47:48 crc kubenswrapper[4770]: I0126 18:47:48.720993 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k9258" podStartSLOduration=3.273414918 podStartE2EDuration="5.720971939s" podCreationTimestamp="2026-01-26 18:47:43 +0000 UTC" firstStartedPulling="2026-01-26 18:47:45.608833581 +0000 UTC m=+350.173740343" lastFinishedPulling="2026-01-26 18:47:48.056390622 +0000 UTC m=+352.621297364" observedRunningTime="2026-01-26 18:47:48.718609384 
+0000 UTC m=+353.283516116" watchObservedRunningTime="2026-01-26 18:47:48.720971939 +0000 UTC m=+353.285878671" Jan 26 18:47:49 crc kubenswrapper[4770]: I0126 18:47:49.660823 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7flk" event={"ID":"e8ce5003-8637-4aaa-a35b-f8b6f9a04905","Type":"ContainerStarted","Data":"9d6c26706ea0d0d0b5059d562aacd5e47332dca0087a11f3173020753cfe56e1"} Jan 26 18:47:49 crc kubenswrapper[4770]: I0126 18:47:49.662502 4770 generic.go:334] "Generic (PLEG): container finished" podID="727229ac-add6-4217-b9c8-b83ee24a8d11" containerID="36eedea7a42b70e899645fafbe086ffbb210c4ab0575fde75337d0526b9c208e" exitCode=0 Jan 26 18:47:49 crc kubenswrapper[4770]: I0126 18:47:49.663853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8l4" event={"ID":"727229ac-add6-4217-b9c8-b83ee24a8d11","Type":"ContainerDied","Data":"36eedea7a42b70e899645fafbe086ffbb210c4ab0575fde75337d0526b9c208e"} Jan 26 18:47:50 crc kubenswrapper[4770]: I0126 18:47:50.669855 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7k8l4" event={"ID":"727229ac-add6-4217-b9c8-b83ee24a8d11","Type":"ContainerStarted","Data":"9f3e99aad86cf46309406362aa26daa7517145c4e67b28a915aa42d4192db7cc"} Jan 26 18:47:50 crc kubenswrapper[4770]: I0126 18:47:50.673789 4770 generic.go:334] "Generic (PLEG): container finished" podID="e8ce5003-8637-4aaa-a35b-f8b6f9a04905" containerID="9d6c26706ea0d0d0b5059d562aacd5e47332dca0087a11f3173020753cfe56e1" exitCode=0 Jan 26 18:47:50 crc kubenswrapper[4770]: I0126 18:47:50.673837 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7flk" event={"ID":"e8ce5003-8637-4aaa-a35b-f8b6f9a04905","Type":"ContainerDied","Data":"9d6c26706ea0d0d0b5059d562aacd5e47332dca0087a11f3173020753cfe56e1"} Jan 26 18:47:50 crc kubenswrapper[4770]: I0126 18:47:50.688867 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7k8l4" podStartSLOduration=2.261710528 podStartE2EDuration="4.688848998s" podCreationTimestamp="2026-01-26 18:47:46 +0000 UTC" firstStartedPulling="2026-01-26 18:47:47.633042125 +0000 UTC m=+352.197948867" lastFinishedPulling="2026-01-26 18:47:50.060180605 +0000 UTC m=+354.625087337" observedRunningTime="2026-01-26 18:47:50.685738401 +0000 UTC m=+355.250645133" watchObservedRunningTime="2026-01-26 18:47:50.688848998 +0000 UTC m=+355.253755730" Jan 26 18:47:53 crc kubenswrapper[4770]: I0126 18:47:53.689870 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7flk" event={"ID":"e8ce5003-8637-4aaa-a35b-f8b6f9a04905","Type":"ContainerStarted","Data":"240b38405f1a070f2df8c4f0d420d0eba6e1b0d8adbb0c41abccf49a7a66b8cc"} Jan 26 18:47:53 crc kubenswrapper[4770]: I0126 18:47:53.707805 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g7flk" podStartSLOduration=5.285563806 podStartE2EDuration="7.707789778s" podCreationTimestamp="2026-01-26 18:47:46 +0000 UTC" firstStartedPulling="2026-01-26 18:47:48.649267663 +0000 UTC m=+353.214174395" lastFinishedPulling="2026-01-26 18:47:51.071493635 +0000 UTC m=+355.636400367" observedRunningTime="2026-01-26 18:47:53.706219784 +0000 UTC m=+358.271126526" watchObservedRunningTime="2026-01-26 18:47:53.707789778 +0000 UTC m=+358.272696510" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.277737 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.277789 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.316397 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.505766 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.505811 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.551554 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.742322 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k9258" Jan 26 18:47:54 crc kubenswrapper[4770]: I0126 18:47:54.743043 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 18:47:56 crc kubenswrapper[4770]: I0126 18:47:56.301761 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nzz8z" Jan 26 18:47:56 crc kubenswrapper[4770]: I0126 18:47:56.360200 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pp4k8"] Jan 26 18:47:56 crc kubenswrapper[4770]: I0126 18:47:56.757488 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:56 crc kubenswrapper[4770]: I0126 18:47:56.757866 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:56 crc kubenswrapper[4770]: I0126 18:47:56.799269 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 
18:47:57 crc kubenswrapper[4770]: I0126 18:47:57.668831 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54464559b6-xchll"] Jan 26 18:47:57 crc kubenswrapper[4770]: I0126 18:47:57.669042 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" podUID="cb6c1a6c-25f9-4c68-a442-e3442489dd94" containerName="controller-manager" containerID="cri-o://1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0" gracePeriod=30 Jan 26 18:47:57 crc kubenswrapper[4770]: I0126 18:47:57.828048 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7k8l4" Jan 26 18:47:57 crc kubenswrapper[4770]: I0126 18:47:57.871852 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:57 crc kubenswrapper[4770]: I0126 18:47:57.872157 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.541738 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.678355 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-client-ca\") pod \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.678904 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-proxy-ca-bundles\") pod \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.678999 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/cb6c1a6c-25f9-4c68-a442-e3442489dd94-kube-api-access-b6spz\") pod \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.679101 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb6c1a6c-25f9-4c68-a442-e3442489dd94-serving-cert\") pod \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.679186 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-config\") pod \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\" (UID: \"cb6c1a6c-25f9-4c68-a442-e3442489dd94\") " Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.679456 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cb6c1a6c-25f9-4c68-a442-e3442489dd94" (UID: "cb6c1a6c-25f9-4c68-a442-e3442489dd94"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.679597 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.680249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-config" (OuterVolumeSpecName: "config") pod "cb6c1a6c-25f9-4c68-a442-e3442489dd94" (UID: "cb6c1a6c-25f9-4c68-a442-e3442489dd94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.680400 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb6c1a6c-25f9-4c68-a442-e3442489dd94" (UID: "cb6c1a6c-25f9-4c68-a442-e3442489dd94"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.687305 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6c1a6c-25f9-4c68-a442-e3442489dd94-kube-api-access-b6spz" (OuterVolumeSpecName: "kube-api-access-b6spz") pod "cb6c1a6c-25f9-4c68-a442-e3442489dd94" (UID: "cb6c1a6c-25f9-4c68-a442-e3442489dd94"). InnerVolumeSpecName "kube-api-access-b6spz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.689993 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb6c1a6c-25f9-4c68-a442-e3442489dd94-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb6c1a6c-25f9-4c68-a442-e3442489dd94" (UID: "cb6c1a6c-25f9-4c68-a442-e3442489dd94"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.716021 4770 generic.go:334] "Generic (PLEG): container finished" podID="cb6c1a6c-25f9-4c68-a442-e3442489dd94" containerID="1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0" exitCode=0 Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.716394 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.716389 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" event={"ID":"cb6c1a6c-25f9-4c68-a442-e3442489dd94","Type":"ContainerDied","Data":"1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0"} Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.716548 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54464559b6-xchll" event={"ID":"cb6c1a6c-25f9-4c68-a442-e3442489dd94","Type":"ContainerDied","Data":"efebde322f8fd67400f4e9d4ca8c37289c3e8c3520e9c5ac9408da5f778a2d6a"} Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.716583 4770 scope.go:117] "RemoveContainer" containerID="1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.735937 4770 scope.go:117] "RemoveContainer" containerID="1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0" Jan 26 
18:47:58 crc kubenswrapper[4770]: E0126 18:47:58.736427 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0\": container with ID starting with 1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0 not found: ID does not exist" containerID="1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.736463 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0"} err="failed to get container status \"1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0\": rpc error: code = NotFound desc = could not find container \"1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0\": container with ID starting with 1af495a003aa4a4c77d290e1f798d7c60f51f62bfc82cf3fdb43c1842ea88cb0 not found: ID does not exist" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.738859 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf"] Jan 26 18:47:58 crc kubenswrapper[4770]: E0126 18:47:58.739179 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6c1a6c-25f9-4c68-a442-e3442489dd94" containerName="controller-manager" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.739302 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6c1a6c-25f9-4c68-a442-e3442489dd94" containerName="controller-manager" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.739433 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb6c1a6c-25f9-4c68-a442-e3442489dd94" containerName="controller-manager" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.739910 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.741661 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.741680 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.742869 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.743001 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.743154 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.747444 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.751890 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.757803 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54464559b6-xchll"] Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.765236 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf"] Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.781112 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54464559b6-xchll"] Jan 26 18:47:58 crc 
kubenswrapper[4770]: I0126 18:47:58.781411 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.781443 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6spz\" (UniqueName: \"kubernetes.io/projected/cb6c1a6c-25f9-4c68-a442-e3442489dd94-kube-api-access-b6spz\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.781454 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb6c1a6c-25f9-4c68-a442-e3442489dd94-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.781464 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb6c1a6c-25f9-4c68-a442-e3442489dd94-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.882527 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvvr\" (UniqueName: \"kubernetes.io/projected/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-kube-api-access-kfvvr\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.882608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-serving-cert\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.882636 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-config\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.882666 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-client-ca\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.882712 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-proxy-ca-bundles\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.907023 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g7flk" podUID="e8ce5003-8637-4aaa-a35b-f8b6f9a04905" containerName="registry-server" probeResult="failure" output=< Jan 26 18:47:58 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 18:47:58 crc kubenswrapper[4770]: > Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.984093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-proxy-ca-bundles\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: 
\"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.984517 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvvr\" (UniqueName: \"kubernetes.io/projected/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-kube-api-access-kfvvr\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.984647 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-serving-cert\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.984765 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-config\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.984892 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-client-ca\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.985568 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-proxy-ca-bundles\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.985922 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-client-ca\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.986715 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-config\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:58 crc kubenswrapper[4770]: I0126 18:47:58.988152 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-serving-cert\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:59 crc kubenswrapper[4770]: I0126 18:47:59.012903 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvvr\" (UniqueName: \"kubernetes.io/projected/1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f-kube-api-access-kfvvr\") pod \"controller-manager-69fcc87cf9-vvrbf\" (UID: \"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f\") " pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:59 crc kubenswrapper[4770]: I0126 18:47:59.058295 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:47:59 crc kubenswrapper[4770]: I0126 18:47:59.520681 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf"] Jan 26 18:47:59 crc kubenswrapper[4770]: I0126 18:47:59.721673 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" event={"ID":"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f","Type":"ContainerStarted","Data":"2138b1e4dd95d32bad6b00d68ce953180b9e6dca920c7f1c15e8a627000f9233"} Jan 26 18:47:59 crc kubenswrapper[4770]: I0126 18:47:59.774999 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb6c1a6c-25f9-4c68-a442-e3442489dd94" path="/var/lib/kubelet/pods/cb6c1a6c-25f9-4c68-a442-e3442489dd94/volumes" Jan 26 18:48:00 crc kubenswrapper[4770]: I0126 18:48:00.330245 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:48:00 crc kubenswrapper[4770]: I0126 18:48:00.330575 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:48:00 crc kubenswrapper[4770]: I0126 18:48:00.743077 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" event={"ID":"1c612219-9c3d-4f5c-8ab5-f93e0ec2da5f","Type":"ContainerStarted","Data":"423adf405b36a8b785d8c126cbc0c12b1aba657b35053ad9f95b7a9981e92379"} Jan 26 18:48:00 crc 
kubenswrapper[4770]: I0126 18:48:00.743338 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:48:00 crc kubenswrapper[4770]: I0126 18:48:00.748386 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" Jan 26 18:48:00 crc kubenswrapper[4770]: I0126 18:48:00.785000 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69fcc87cf9-vvrbf" podStartSLOduration=3.784983394 podStartE2EDuration="3.784983394s" podCreationTimestamp="2026-01-26 18:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:48:00.765867688 +0000 UTC m=+365.330774550" watchObservedRunningTime="2026-01-26 18:48:00.784983394 +0000 UTC m=+365.349890126" Jan 26 18:48:07 crc kubenswrapper[4770]: I0126 18:48:07.911393 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:48:07 crc kubenswrapper[4770]: I0126 18:48:07.964026 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g7flk" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.411513 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" podUID="7acc36bb-6e6d-40cf-957f-82e0b5c50b59" containerName="registry" containerID="cri-o://571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10" gracePeriod=30 Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.865338 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.877094 4770 generic.go:334] "Generic (PLEG): container finished" podID="7acc36bb-6e6d-40cf-957f-82e0b5c50b59" containerID="571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10" exitCode=0 Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.877174 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.877163 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" event={"ID":"7acc36bb-6e6d-40cf-957f-82e0b5c50b59","Type":"ContainerDied","Data":"571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10"} Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.877456 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pp4k8" event={"ID":"7acc36bb-6e6d-40cf-957f-82e0b5c50b59","Type":"ContainerDied","Data":"62eab0a33d7018edc5aa99bfe80d3bbae22c6ca9586e727930037158f6f40e50"} Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.877499 4770 scope.go:117] "RemoveContainer" containerID="571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.916550 4770 scope.go:117] "RemoveContainer" containerID="571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10" Jan 26 18:48:21 crc kubenswrapper[4770]: E0126 18:48:21.917359 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10\": container with ID starting with 571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10 not found: ID does not exist" 
containerID="571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.917449 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10"} err="failed to get container status \"571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10\": rpc error: code = NotFound desc = could not find container \"571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10\": container with ID starting with 571eae4526a2d1de1fc80d1055c0935197ca956c9719f9f512a07b0906562d10 not found: ID does not exist" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934108 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-certificates\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934176 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-installation-pull-secrets\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-tls\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934560 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934638 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-ca-trust-extracted\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934751 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-trusted-ca\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934870 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxrfk\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-kube-api-access-lxrfk\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.934921 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-bound-sa-token\") pod \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\" (UID: \"7acc36bb-6e6d-40cf-957f-82e0b5c50b59\") " Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.935552 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.936874 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.944801 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-kube-api-access-lxrfk" (OuterVolumeSpecName: "kube-api-access-lxrfk") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "kube-api-access-lxrfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.947843 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.948142 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.949467 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.949503 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 18:48:21 crc kubenswrapper[4770]: I0126 18:48:21.969670 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7acc36bb-6e6d-40cf-957f-82e0b5c50b59" (UID: "7acc36bb-6e6d-40cf-957f-82e0b5c50b59"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.036445 4770 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.036514 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.036541 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxrfk\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-kube-api-access-lxrfk\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.036567 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.036589 4770 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.036610 4770 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.036629 4770 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7acc36bb-6e6d-40cf-957f-82e0b5c50b59-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 18:48:22 crc 
kubenswrapper[4770]: I0126 18:48:22.238529 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pp4k8"] Jan 26 18:48:22 crc kubenswrapper[4770]: I0126 18:48:22.245304 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pp4k8"] Jan 26 18:48:23 crc kubenswrapper[4770]: I0126 18:48:23.776912 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acc36bb-6e6d-40cf-957f-82e0b5c50b59" path="/var/lib/kubelet/pods/7acc36bb-6e6d-40cf-957f-82e0b5c50b59/volumes" Jan 26 18:48:30 crc kubenswrapper[4770]: I0126 18:48:30.330319 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:48:30 crc kubenswrapper[4770]: I0126 18:48:30.330683 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:49:00 crc kubenswrapper[4770]: I0126 18:49:00.331052 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:49:00 crc kubenswrapper[4770]: I0126 18:49:00.331624 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:49:00 crc kubenswrapper[4770]: I0126 18:49:00.331683 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:49:00 crc kubenswrapper[4770]: I0126 18:49:00.332351 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"262471d88f35d197f54a30215bc979a01665f1c69b2a33190dca6d33020b72c9"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:49:00 crc kubenswrapper[4770]: I0126 18:49:00.332416 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://262471d88f35d197f54a30215bc979a01665f1c69b2a33190dca6d33020b72c9" gracePeriod=600 Jan 26 18:49:01 crc kubenswrapper[4770]: I0126 18:49:01.116287 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="262471d88f35d197f54a30215bc979a01665f1c69b2a33190dca6d33020b72c9" exitCode=0 Jan 26 18:49:01 crc kubenswrapper[4770]: I0126 18:49:01.116479 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"262471d88f35d197f54a30215bc979a01665f1c69b2a33190dca6d33020b72c9"} Jan 26 18:49:01 crc kubenswrapper[4770]: I0126 18:49:01.116588 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" 
event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"e8d33ec21a7bce033c16a0817e158b81ce8af4caff96675d5131a56d2e6cf8d9"} Jan 26 18:49:01 crc kubenswrapper[4770]: I0126 18:49:01.116603 4770 scope.go:117] "RemoveContainer" containerID="46b14d15e1c533a57968be276a2ea6c81e0a81b077245290cdd2acd05bff3573" Jan 26 18:51:00 crc kubenswrapper[4770]: I0126 18:51:00.331359 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:51:00 crc kubenswrapper[4770]: I0126 18:51:00.332070 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:51:30 crc kubenswrapper[4770]: I0126 18:51:30.330788 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:51:30 crc kubenswrapper[4770]: I0126 18:51:30.331544 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.331219 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.331666 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.331731 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.332295 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8d33ec21a7bce033c16a0817e158b81ce8af4caff96675d5131a56d2e6cf8d9"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.332351 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://e8d33ec21a7bce033c16a0817e158b81ce8af4caff96675d5131a56d2e6cf8d9" gracePeriod=600 Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.597791 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="e8d33ec21a7bce033c16a0817e158b81ce8af4caff96675d5131a56d2e6cf8d9" exitCode=0 Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.597841 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"e8d33ec21a7bce033c16a0817e158b81ce8af4caff96675d5131a56d2e6cf8d9"} Jan 26 18:52:00 crc kubenswrapper[4770]: I0126 18:52:00.597881 4770 scope.go:117] "RemoveContainer" containerID="262471d88f35d197f54a30215bc979a01665f1c69b2a33190dca6d33020b72c9" Jan 26 18:52:01 crc kubenswrapper[4770]: I0126 18:52:01.614890 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"a472ada11cc8156b8c652f50413b2cfc3ca2807a990cd33cf00079d10d205fee"} Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.871431 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xvnht"] Jan 26 18:53:27 crc kubenswrapper[4770]: E0126 18:53:27.872224 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acc36bb-6e6d-40cf-957f-82e0b5c50b59" containerName="registry" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.872237 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acc36bb-6e6d-40cf-957f-82e0b5c50b59" containerName="registry" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.872335 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acc36bb-6e6d-40cf-957f-82e0b5c50b59" containerName="registry" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.872747 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.875073 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bjxq6" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.875251 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.875458 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.877508 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-zg6pt"] Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.878358 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-zg6pt" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.879993 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kd8fl" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.891732 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xvnht"] Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.900435 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wbbh5"] Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.901733 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.904974 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2htnb" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.911239 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-zg6pt"] Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.914639 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ljj\" (UniqueName: \"kubernetes.io/projected/f363eee1-0b76-4000-9ab2-8506a4ccb1db-kube-api-access-f8ljj\") pod \"cert-manager-webhook-687f57d79b-wbbh5\" (UID: \"f363eee1-0b76-4000-9ab2-8506a4ccb1db\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.914721 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkj4z\" (UniqueName: \"kubernetes.io/projected/5d309de3-0825-4929-9867-fdcd48df6320-kube-api-access-zkj4z\") pod \"cert-manager-cainjector-cf98fcc89-xvnht\" (UID: \"5d309de3-0825-4929-9867-fdcd48df6320\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.914752 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txcgt\" (UniqueName: \"kubernetes.io/projected/5e274239-64f9-423e-a00b-0867c43ce747-kube-api-access-txcgt\") pod \"cert-manager-858654f9db-zg6pt\" (UID: \"5e274239-64f9-423e-a00b-0867c43ce747\") " pod="cert-manager/cert-manager-858654f9db-zg6pt" Jan 26 18:53:27 crc kubenswrapper[4770]: I0126 18:53:27.937307 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wbbh5"] Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.015873 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8ljj\" (UniqueName: \"kubernetes.io/projected/f363eee1-0b76-4000-9ab2-8506a4ccb1db-kube-api-access-f8ljj\") pod \"cert-manager-webhook-687f57d79b-wbbh5\" (UID: \"f363eee1-0b76-4000-9ab2-8506a4ccb1db\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.015941 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txcgt\" (UniqueName: \"kubernetes.io/projected/5e274239-64f9-423e-a00b-0867c43ce747-kube-api-access-txcgt\") pod \"cert-manager-858654f9db-zg6pt\" (UID: \"5e274239-64f9-423e-a00b-0867c43ce747\") " pod="cert-manager/cert-manager-858654f9db-zg6pt" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.015963 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkj4z\" (UniqueName: \"kubernetes.io/projected/5d309de3-0825-4929-9867-fdcd48df6320-kube-api-access-zkj4z\") pod \"cert-manager-cainjector-cf98fcc89-xvnht\" (UID: \"5d309de3-0825-4929-9867-fdcd48df6320\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.033025 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txcgt\" (UniqueName: \"kubernetes.io/projected/5e274239-64f9-423e-a00b-0867c43ce747-kube-api-access-txcgt\") pod \"cert-manager-858654f9db-zg6pt\" (UID: \"5e274239-64f9-423e-a00b-0867c43ce747\") " pod="cert-manager/cert-manager-858654f9db-zg6pt" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.033493 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkj4z\" (UniqueName: \"kubernetes.io/projected/5d309de3-0825-4929-9867-fdcd48df6320-kube-api-access-zkj4z\") pod \"cert-manager-cainjector-cf98fcc89-xvnht\" (UID: \"5d309de3-0825-4929-9867-fdcd48df6320\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.033676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8ljj\" (UniqueName: \"kubernetes.io/projected/f363eee1-0b76-4000-9ab2-8506a4ccb1db-kube-api-access-f8ljj\") pod \"cert-manager-webhook-687f57d79b-wbbh5\" (UID: \"f363eee1-0b76-4000-9ab2-8506a4ccb1db\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.200056 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.206597 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-zg6pt" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.231576 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.404790 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-zg6pt"] Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.413891 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:53:28 crc kubenswrapper[4770]: W0126 18:53:28.459788 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d309de3_0825_4929_9867_fdcd48df6320.slice/crio-458e198a4cd7fa91522bd3e85c73345031c46c12b4ebbf244cbe50c25e1016b2 WatchSource:0}: Error finding container 458e198a4cd7fa91522bd3e85c73345031c46c12b4ebbf244cbe50c25e1016b2: Status 404 returned error can't find the container with id 458e198a4cd7fa91522bd3e85c73345031c46c12b4ebbf244cbe50c25e1016b2 Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.460441 4770 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xvnht"] Jan 26 18:53:28 crc kubenswrapper[4770]: I0126 18:53:28.486838 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wbbh5"] Jan 26 18:53:28 crc kubenswrapper[4770]: W0126 18:53:28.488428 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf363eee1_0b76_4000_9ab2_8506a4ccb1db.slice/crio-ec0e7f397158621d8fed5d88e8c95e0a60820dd681bd8df34f70c8a15265e995 WatchSource:0}: Error finding container ec0e7f397158621d8fed5d88e8c95e0a60820dd681bd8df34f70c8a15265e995: Status 404 returned error can't find the container with id ec0e7f397158621d8fed5d88e8c95e0a60820dd681bd8df34f70c8a15265e995 Jan 26 18:53:29 crc kubenswrapper[4770]: I0126 18:53:29.155647 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-zg6pt" event={"ID":"5e274239-64f9-423e-a00b-0867c43ce747","Type":"ContainerStarted","Data":"8862368cdf3ddd8105ba5fc28902033f4d55d6fef9b7a4c745668201fd67bb30"} Jan 26 18:53:29 crc kubenswrapper[4770]: I0126 18:53:29.158311 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" event={"ID":"f363eee1-0b76-4000-9ab2-8506a4ccb1db","Type":"ContainerStarted","Data":"ec0e7f397158621d8fed5d88e8c95e0a60820dd681bd8df34f70c8a15265e995"} Jan 26 18:53:29 crc kubenswrapper[4770]: I0126 18:53:29.159443 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" event={"ID":"5d309de3-0825-4929-9867-fdcd48df6320","Type":"ContainerStarted","Data":"458e198a4cd7fa91522bd3e85c73345031c46c12b4ebbf244cbe50c25e1016b2"} Jan 26 18:53:33 crc kubenswrapper[4770]: I0126 18:53:33.182460 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" 
event={"ID":"f363eee1-0b76-4000-9ab2-8506a4ccb1db","Type":"ContainerStarted","Data":"3d665cac456b7c5edfaec8eba4391b21f92c5cabc75a43463fa3e3d7c139bc8a"} Jan 26 18:53:33 crc kubenswrapper[4770]: I0126 18:53:33.182985 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" Jan 26 18:53:33 crc kubenswrapper[4770]: I0126 18:53:33.184190 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" event={"ID":"5d309de3-0825-4929-9867-fdcd48df6320","Type":"ContainerStarted","Data":"ca092dcd37caab9a771f285e11c65d6a63872a7cc53ae046913d1a8bd560827e"} Jan 26 18:53:33 crc kubenswrapper[4770]: I0126 18:53:33.185990 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-zg6pt" event={"ID":"5e274239-64f9-423e-a00b-0867c43ce747","Type":"ContainerStarted","Data":"44faaa3f90896fc3d79ba476374d1f84dc6e148761fd1339003fa6b349a16d11"} Jan 26 18:53:33 crc kubenswrapper[4770]: I0126 18:53:33.199843 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" podStartSLOduration=2.275598292 podStartE2EDuration="6.199817081s" podCreationTimestamp="2026-01-26 18:53:27 +0000 UTC" firstStartedPulling="2026-01-26 18:53:28.490631366 +0000 UTC m=+693.055538098" lastFinishedPulling="2026-01-26 18:53:32.414850125 +0000 UTC m=+696.979756887" observedRunningTime="2026-01-26 18:53:33.199294148 +0000 UTC m=+697.764200890" watchObservedRunningTime="2026-01-26 18:53:33.199817081 +0000 UTC m=+697.764723823" Jan 26 18:53:33 crc kubenswrapper[4770]: I0126 18:53:33.225272 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xvnht" podStartSLOduration=2.2666964050000002 podStartE2EDuration="6.225251091s" podCreationTimestamp="2026-01-26 18:53:27 +0000 UTC" firstStartedPulling="2026-01-26 18:53:28.462203496 
+0000 UTC m=+693.027110228" lastFinishedPulling="2026-01-26 18:53:32.420758142 +0000 UTC m=+696.985664914" observedRunningTime="2026-01-26 18:53:33.221385358 +0000 UTC m=+697.786292130" watchObservedRunningTime="2026-01-26 18:53:33.225251091 +0000 UTC m=+697.790157823" Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.947714 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-zg6pt" podStartSLOduration=6.961922598 podStartE2EDuration="10.947684001s" podCreationTimestamp="2026-01-26 18:53:27 +0000 UTC" firstStartedPulling="2026-01-26 18:53:28.413553746 +0000 UTC m=+692.978460488" lastFinishedPulling="2026-01-26 18:53:32.399315159 +0000 UTC m=+696.964221891" observedRunningTime="2026-01-26 18:53:33.251485942 +0000 UTC m=+697.816392664" watchObservedRunningTime="2026-01-26 18:53:37.947684001 +0000 UTC m=+702.512590733" Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.951195 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lgvzv"] Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.951562 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-controller" containerID="cri-o://bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82" gracePeriod=30 Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.951911 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="sbdb" containerID="cri-o://530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f" gracePeriod=30 Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.951955 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" 
podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="nbdb" containerID="cri-o://f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7" gracePeriod=30 Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.951988 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="northd" containerID="cri-o://7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028" gracePeriod=30 Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.952016 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b" gracePeriod=30 Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.952047 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-node" containerID="cri-o://7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153" gracePeriod=30 Jan 26 18:53:37 crc kubenswrapper[4770]: I0126 18:53:37.952073 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-acl-logging" containerID="cri-o://a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2" gracePeriod=30 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.012180 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" 
containerID="cri-o://3dbc66c1327f6362b589dffd636803e9bc715970fe8b65bf078d6ef91b2d88dd" gracePeriod=30 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.219027 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovnkube-controller/3.log" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.221291 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovn-acl-logging/0.log" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.221869 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovn-controller/0.log" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222358 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="3dbc66c1327f6362b589dffd636803e9bc715970fe8b65bf078d6ef91b2d88dd" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222384 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222394 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222403 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222413 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" 
containerID="1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222424 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153" exitCode=0 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222432 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2" exitCode=143 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222443 4770 generic.go:334] "Generic (PLEG): container finished" podID="49551d69-752c-4bcd-b265-d98a3ec92838" containerID="bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82" exitCode=143 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222434 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"3dbc66c1327f6362b589dffd636803e9bc715970fe8b65bf078d6ef91b2d88dd"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222516 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222532 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" 
event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222558 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222573 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222585 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222585 4770 scope.go:117] "RemoveContainer" containerID="df0f0614cc5b9b098a5168f57c57f95a792767605b6736b6e9feaf511676fd97" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222596 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" event={"ID":"49551d69-752c-4bcd-b265-d98a3ec92838","Type":"ContainerDied","Data":"c03694b9b6023e05648a8ce21790ebb2c9ea87c84aeca356255ca2de9c56fcf0"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.222621 4770 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c03694b9b6023e05648a8ce21790ebb2c9ea87c84aeca356255ca2de9c56fcf0" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.224783 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/2.log" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.225525 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/1.log" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.225569 4770 generic.go:334] "Generic (PLEG): container finished" podID="cf1d4063-db34-411a-bdbc-3736acf7f126" containerID="1c9be738ad7c937d32afeacfb09c00e68ba897b2b18ad8e2781db0f5eabbf845" exitCode=2 Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.225593 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerDied","Data":"1c9be738ad7c937d32afeacfb09c00e68ba897b2b18ad8e2781db0f5eabbf845"} Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.226159 4770 scope.go:117] "RemoveContainer" containerID="1c9be738ad7c937d32afeacfb09c00e68ba897b2b18ad8e2781db0f5eabbf845" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.226424 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-f87gd_openshift-multus(cf1d4063-db34-411a-bdbc-3736acf7f126)\"" pod="openshift-multus/multus-f87gd" podUID="cf1d4063-db34-411a-bdbc-3736acf7f126" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.236227 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wbbh5" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.254460 4770 scope.go:117] 
"RemoveContainer" containerID="7d649e52f86c57750db9b86eba65dfd84a7ae008f37c143d7633d89273394ba0" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.262045 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovn-acl-logging/0.log" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.262907 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovn-controller/0.log" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.265521 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329343 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l28tp"] Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329785 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329807 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329820 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329828 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329842 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329850 
4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329862 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329869 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329882 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329890 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329904 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="nbdb" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329911 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="nbdb" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329924 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-node" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329933 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-node" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329941 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="northd" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329948 4770 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="northd" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329962 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329969 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329978 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kubecfg-setup" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.329985 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kubecfg-setup" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.329996 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-acl-logging" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330003 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-acl-logging" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.330015 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="sbdb" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330023 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="sbdb" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330143 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="nbdb" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330155 4770 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-node" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330168 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="sbdb" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330181 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330193 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330204 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330216 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330228 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="northd" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330239 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330253 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovn-acl-logging" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330266 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: E0126 18:53:38.330389 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.330399 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.334017 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" containerName="ovnkube-controller" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.336046 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362035 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-systemd\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362094 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-netd\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362134 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-slash\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362169 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-node-log\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362216 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-systemd-units\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362243 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-bin\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362268 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362294 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-env-overrides\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362307 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-slash" (OuterVolumeSpecName: "host-slash") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362315 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-node-log" (OuterVolumeSpecName: "node-log") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362324 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-log-socket\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362356 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362365 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-log-socket" (OuterVolumeSpecName: "log-socket") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362317 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362417 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-ovn-kubernetes\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362648 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-ovn\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362677 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49551d69-752c-4bcd-b265-d98a3ec92838-ovn-node-metrics-cert\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " 
Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362686 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362742 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-var-lib-cni-networks-ovn-kubernetes\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362734 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362777 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-config\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362820 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg8r7\" (UniqueName: \"kubernetes.io/projected/49551d69-752c-4bcd-b265-d98a3ec92838-kube-api-access-rg8r7\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362818 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362844 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-openvswitch\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362887 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362898 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-netns\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362915 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362926 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-script-lib\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362952 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362965 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-kubelet\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.362996 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-etc-openvswitch\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363034 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-var-lib-openvswitch\") pod \"49551d69-752c-4bcd-b265-d98a3ec92838\" (UID: \"49551d69-752c-4bcd-b265-d98a3ec92838\") " Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363030 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363070 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363161 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363302 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-run-netns\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363373 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363379 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-var-lib-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363459 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363555 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-systemd-units\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363593 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-cni-netd\") pod 
\"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363624 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovnkube-config\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363658 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-env-overrides\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363764 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf94\" (UniqueName: \"kubernetes.io/projected/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-kube-api-access-cdf94\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363810 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovn-node-metrics-cert\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363841 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-node-log\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363881 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-slash\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363907 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-ovn\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363933 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363960 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-cni-bin\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.363988 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovnkube-script-lib\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364043 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-systemd\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-kubelet\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364100 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-log-socket\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364134 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-etc-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364160 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364225 4770 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364243 4770 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364260 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364277 4770 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364292 4770 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364307 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 
18:53:38.364321 4770 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364336 4770 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364353 4770 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364368 4770 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364382 4770 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364396 4770 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364411 4770 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364425 4770 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364438 4770 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364451 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49551d69-752c-4bcd-b265-d98a3ec92838-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.364465 4770 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.390384 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.390460 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49551d69-752c-4bcd-b265-d98a3ec92838-kube-api-access-rg8r7" (OuterVolumeSpecName: "kube-api-access-rg8r7") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "kube-api-access-rg8r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.390913 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49551d69-752c-4bcd-b265-d98a3ec92838-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "49551d69-752c-4bcd-b265-d98a3ec92838" (UID: "49551d69-752c-4bcd-b265-d98a3ec92838"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.465872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.465934 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-systemd-units\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.465951 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-cni-netd\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.465966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovnkube-config\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.465984 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-env-overrides\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.465995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466053 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-systemd-units\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466004 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdf94\" (UniqueName: \"kubernetes.io/projected/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-kube-api-access-cdf94\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466125 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-cni-netd\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: 
I0126 18:53:38.466170 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovn-node-metrics-cert\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466199 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-node-log\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466237 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-slash\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466258 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-ovn\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466267 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-node-log\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466292 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-slash\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466302 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-ovn\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466326 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-cni-bin\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466342 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-cni-bin\") pod 
\"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466374 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovnkube-script-lib\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466429 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-systemd\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466474 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-kubelet\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466491 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-run-systemd\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466496 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-log-socket\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466519 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-log-socket\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466533 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-etc-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.466552 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.470846 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-env-overrides\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.470878 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-run-netns\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" 
Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.470930 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.470953 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-etc-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.470978 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-var-lib-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.470928 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-var-lib-openvswitch\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.471161 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg8r7\" (UniqueName: \"kubernetes.io/projected/49551d69-752c-4bcd-b265-d98a3ec92838-kube-api-access-rg8r7\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.471194 4770 reconciler_common.go:293] "Volume detached for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49551d69-752c-4bcd-b265-d98a3ec92838-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.471211 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49551d69-752c-4bcd-b265-d98a3ec92838-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.471230 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovnkube-script-lib\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.471331 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovnkube-config\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.476964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-kubelet\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.480122 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-ovn-node-metrics-cert\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 
18:53:38.470984 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-host-run-netns\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.486869 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdf94\" (UniqueName: \"kubernetes.io/projected/8c7e16ce-f6ce-48d3-a332-4fc293fb125c-kube-api-access-cdf94\") pod \"ovnkube-node-l28tp\" (UID: \"8c7e16ce-f6ce-48d3-a332-4fc293fb125c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:38 crc kubenswrapper[4770]: I0126 18:53:38.660433 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.234321 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/2.log" Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.241144 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovn-acl-logging/0.log" Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.241796 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lgvzv_49551d69-752c-4bcd-b265-d98a3ec92838/ovn-controller/0.log" Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.242619 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lgvzv" Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.244592 4770 generic.go:334] "Generic (PLEG): container finished" podID="8c7e16ce-f6ce-48d3-a332-4fc293fb125c" containerID="7ecc7a7b997dcb9f4891f614bc762b51774c981c6965dd8c8dcac41c23f3246c" exitCode=0 Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.244638 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerDied","Data":"7ecc7a7b997dcb9f4891f614bc762b51774c981c6965dd8c8dcac41c23f3246c"} Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.244670 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"1259247b6fae69d2da2e4bd8fc1818f8f9e5607432b86e52f7c5b8453e356709"} Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.306182 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lgvzv"] Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.309606 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lgvzv"] Jan 26 18:53:39 crc kubenswrapper[4770]: I0126 18:53:39.775587 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49551d69-752c-4bcd-b265-d98a3ec92838" path="/var/lib/kubelet/pods/49551d69-752c-4bcd-b265-d98a3ec92838/volumes" Jan 26 18:53:40 crc kubenswrapper[4770]: I0126 18:53:40.252936 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"8cddb52372ed2736e11951eb311d5d65dc76a44eb3fa853c8dbf855f6a1025c7"} Jan 26 18:53:40 crc kubenswrapper[4770]: I0126 18:53:40.252970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"d10a5cc936fed4770a6f67e18ea2f33a79ac36fafad25e75355d4aa30721f249"} Jan 26 18:53:40 crc kubenswrapper[4770]: I0126 18:53:40.252985 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"c2eee826b6a0edec7f086b71529ba22e427297cd7756ad3c134fcd4bfe3d4c74"} Jan 26 18:53:40 crc kubenswrapper[4770]: I0126 18:53:40.252994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"36f3885b93047f5d59c99eca0f153d9e0e16f7d73ff8e9524aca08cdc2629af3"} Jan 26 18:53:40 crc kubenswrapper[4770]: I0126 18:53:40.253003 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"38c565843bfd2e9dd9c9b495953f5d7c495d14709a828303dbbe3be3856179e4"} Jan 26 18:53:40 crc kubenswrapper[4770]: I0126 18:53:40.253013 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"d75de26f2f81fb67fa12e255f84d4ceeb1d7fbff807514149d69e6d5f66fc9d1"} Jan 26 18:53:42 crc kubenswrapper[4770]: I0126 18:53:42.269814 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"9178f84eaa8aba2c694da5d425905eacd27fddac81e7529b3220ec98798ab082"} Jan 26 18:53:45 crc kubenswrapper[4770]: I0126 18:53:45.294592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" 
event={"ID":"8c7e16ce-f6ce-48d3-a332-4fc293fb125c","Type":"ContainerStarted","Data":"a8583e15969bce4654c7f22e0dd7d71b02b0550411e5315054e38e9495baad29"} Jan 26 18:53:45 crc kubenswrapper[4770]: I0126 18:53:45.295040 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:45 crc kubenswrapper[4770]: I0126 18:53:45.295062 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:45 crc kubenswrapper[4770]: I0126 18:53:45.330966 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" podStartSLOduration=7.330945299 podStartE2EDuration="7.330945299s" podCreationTimestamp="2026-01-26 18:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:53:45.326986005 +0000 UTC m=+709.891892737" watchObservedRunningTime="2026-01-26 18:53:45.330945299 +0000 UTC m=+709.895852031" Jan 26 18:53:45 crc kubenswrapper[4770]: I0126 18:53:45.332360 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:46 crc kubenswrapper[4770]: I0126 18:53:46.303671 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:46 crc kubenswrapper[4770]: I0126 18:53:46.339892 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:53:51 crc kubenswrapper[4770]: I0126 18:53:51.767107 4770 scope.go:117] "RemoveContainer" containerID="1c9be738ad7c937d32afeacfb09c00e68ba897b2b18ad8e2781db0f5eabbf845" Jan 26 18:53:51 crc kubenswrapper[4770]: E0126 18:53:51.767974 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-f87gd_openshift-multus(cf1d4063-db34-411a-bdbc-3736acf7f126)\"" pod="openshift-multus/multus-f87gd" podUID="cf1d4063-db34-411a-bdbc-3736acf7f126" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.045367 4770 scope.go:117] "RemoveContainer" containerID="a689f5cfa49a89351256e9d579662ece63a7c8a48ce088dcc968b0599ebca2e2" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.070068 4770 scope.go:117] "RemoveContainer" containerID="bf3b32b49db6a74a78ccfff6f9c12e175356cc91e2ccef4ba2e3e0c94b4f8f82" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.089989 4770 scope.go:117] "RemoveContainer" containerID="7df45f2e51c551ea1148930415e349b71d20fe47dfa1faed80c13fb9806d2028" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.123037 4770 scope.go:117] "RemoveContainer" containerID="7ba125b46f2d40d0ebc97ee17fcd649ac04cb75e2a80d9e798c0e592e6d8f153" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.143549 4770 scope.go:117] "RemoveContainer" containerID="3dbc66c1327f6362b589dffd636803e9bc715970fe8b65bf078d6ef91b2d88dd" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.159887 4770 scope.go:117] "RemoveContainer" containerID="530034cc79e06266e0acb4d250427218c7d046976ffaf16e325f179def1a5c4f" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.181814 4770 scope.go:117] "RemoveContainer" containerID="ccb111919bd98c812ba9937afb41ed5b51c6f992e4b51df86637a745eb5dc6d7" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.202683 4770 scope.go:117] "RemoveContainer" containerID="1c446ff3ecd59c1d974dc855ca77d9c7af005dfc6a39da23222dc3e8bef6bb0b" Jan 26 18:53:56 crc kubenswrapper[4770]: I0126 18:53:56.219121 4770 scope.go:117] "RemoveContainer" containerID="f096f4c83bc38106cad270cb4e75a4b30296697c6d93f78c203975b3352a01a7" Jan 26 18:54:00 crc kubenswrapper[4770]: I0126 18:54:00.330736 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:54:00 crc kubenswrapper[4770]: I0126 18:54:00.331213 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:54:05 crc kubenswrapper[4770]: I0126 18:54:05.770019 4770 scope.go:117] "RemoveContainer" containerID="1c9be738ad7c937d32afeacfb09c00e68ba897b2b18ad8e2781db0f5eabbf845" Jan 26 18:54:06 crc kubenswrapper[4770]: I0126 18:54:06.423577 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-f87gd_cf1d4063-db34-411a-bdbc-3736acf7f126/kube-multus/2.log" Jan 26 18:54:06 crc kubenswrapper[4770]: I0126 18:54:06.423920 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-f87gd" event={"ID":"cf1d4063-db34-411a-bdbc-3736acf7f126","Type":"ContainerStarted","Data":"449018ab4ffa639961697b1cd4e3993cc67f661fc58926f8378b02bf549de44f"} Jan 26 18:54:08 crc kubenswrapper[4770]: I0126 18:54:08.691410 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l28tp" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.632215 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d"] Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.634597 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.639298 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d"] Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.639316 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.688668 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmtdk\" (UniqueName: \"kubernetes.io/projected/aaefc356-416c-4919-adb1-de98e007e7a1-kube-api-access-xmtdk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.688805 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.688841 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: 
I0126 18:54:09.790451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmtdk\" (UniqueName: \"kubernetes.io/projected/aaefc356-416c-4919-adb1-de98e007e7a1-kube-api-access-xmtdk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.790575 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.790610 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.791386 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.791419 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.814403 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmtdk\" (UniqueName: \"kubernetes.io/projected/aaefc356-416c-4919-adb1-de98e007e7a1-kube-api-access-xmtdk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:09 crc kubenswrapper[4770]: I0126 18:54:09.965203 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:10 crc kubenswrapper[4770]: I0126 18:54:10.159572 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d"] Jan 26 18:54:10 crc kubenswrapper[4770]: I0126 18:54:10.445434 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" event={"ID":"aaefc356-416c-4919-adb1-de98e007e7a1","Type":"ContainerStarted","Data":"69142cabeb9b46ebe2cbb970311d28219a77d00081e82a80d296e2ae0503bd3d"} Jan 26 18:54:10 crc kubenswrapper[4770]: I0126 18:54:10.445485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" event={"ID":"aaefc356-416c-4919-adb1-de98e007e7a1","Type":"ContainerStarted","Data":"39083b3026fe0574ee93c8a2646e8e14de02af13509de93f92de0563d2f1276f"} Jan 26 18:54:11 crc kubenswrapper[4770]: I0126 18:54:11.453921 4770 
generic.go:334] "Generic (PLEG): container finished" podID="aaefc356-416c-4919-adb1-de98e007e7a1" containerID="69142cabeb9b46ebe2cbb970311d28219a77d00081e82a80d296e2ae0503bd3d" exitCode=0 Jan 26 18:54:11 crc kubenswrapper[4770]: I0126 18:54:11.453970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" event={"ID":"aaefc356-416c-4919-adb1-de98e007e7a1","Type":"ContainerDied","Data":"69142cabeb9b46ebe2cbb970311d28219a77d00081e82a80d296e2ae0503bd3d"} Jan 26 18:54:14 crc kubenswrapper[4770]: I0126 18:54:14.480038 4770 generic.go:334] "Generic (PLEG): container finished" podID="aaefc356-416c-4919-adb1-de98e007e7a1" containerID="338325d264677399b666c7d239d08c16b5549e6a5392b20e4a5e6287dbf11762" exitCode=0 Jan 26 18:54:14 crc kubenswrapper[4770]: I0126 18:54:14.480081 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" event={"ID":"aaefc356-416c-4919-adb1-de98e007e7a1","Type":"ContainerDied","Data":"338325d264677399b666c7d239d08c16b5549e6a5392b20e4a5e6287dbf11762"} Jan 26 18:54:15 crc kubenswrapper[4770]: I0126 18:54:15.487914 4770 generic.go:334] "Generic (PLEG): container finished" podID="aaefc356-416c-4919-adb1-de98e007e7a1" containerID="443531a592f0883cdb43ebfba8a4facd37474587c48c88b8ed154c9900df3476" exitCode=0 Jan 26 18:54:15 crc kubenswrapper[4770]: I0126 18:54:15.488905 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" event={"ID":"aaefc356-416c-4919-adb1-de98e007e7a1","Type":"ContainerDied","Data":"443531a592f0883cdb43ebfba8a4facd37474587c48c88b8ed154c9900df3476"} Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.740155 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.777635 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-util\") pod \"aaefc356-416c-4919-adb1-de98e007e7a1\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.777715 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-bundle\") pod \"aaefc356-416c-4919-adb1-de98e007e7a1\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.777762 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmtdk\" (UniqueName: \"kubernetes.io/projected/aaefc356-416c-4919-adb1-de98e007e7a1-kube-api-access-xmtdk\") pod \"aaefc356-416c-4919-adb1-de98e007e7a1\" (UID: \"aaefc356-416c-4919-adb1-de98e007e7a1\") " Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.780051 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-bundle" (OuterVolumeSpecName: "bundle") pod "aaefc356-416c-4919-adb1-de98e007e7a1" (UID: "aaefc356-416c-4919-adb1-de98e007e7a1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.784354 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaefc356-416c-4919-adb1-de98e007e7a1-kube-api-access-xmtdk" (OuterVolumeSpecName: "kube-api-access-xmtdk") pod "aaefc356-416c-4919-adb1-de98e007e7a1" (UID: "aaefc356-416c-4919-adb1-de98e007e7a1"). InnerVolumeSpecName "kube-api-access-xmtdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.788123 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-util" (OuterVolumeSpecName: "util") pod "aaefc356-416c-4919-adb1-de98e007e7a1" (UID: "aaefc356-416c-4919-adb1-de98e007e7a1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.879522 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.879556 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aaefc356-416c-4919-adb1-de98e007e7a1-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:16 crc kubenswrapper[4770]: I0126 18:54:16.879583 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmtdk\" (UniqueName: \"kubernetes.io/projected/aaefc356-416c-4919-adb1-de98e007e7a1-kube-api-access-xmtdk\") on node \"crc\" DevicePath \"\"" Jan 26 18:54:17 crc kubenswrapper[4770]: I0126 18:54:17.504670 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" event={"ID":"aaefc356-416c-4919-adb1-de98e007e7a1","Type":"ContainerDied","Data":"39083b3026fe0574ee93c8a2646e8e14de02af13509de93f92de0563d2f1276f"} Jan 26 18:54:17 crc kubenswrapper[4770]: I0126 18:54:17.504971 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39083b3026fe0574ee93c8a2646e8e14de02af13509de93f92de0563d2f1276f" Jan 26 18:54:17 crc kubenswrapper[4770]: I0126 18:54:17.504773 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.226502 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k"] Jan 26 18:54:27 crc kubenswrapper[4770]: E0126 18:54:27.229566 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaefc356-416c-4919-adb1-de98e007e7a1" containerName="pull" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.229582 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaefc356-416c-4919-adb1-de98e007e7a1" containerName="pull" Jan 26 18:54:27 crc kubenswrapper[4770]: E0126 18:54:27.229605 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaefc356-416c-4919-adb1-de98e007e7a1" containerName="extract" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.229612 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaefc356-416c-4919-adb1-de98e007e7a1" containerName="extract" Jan 26 18:54:27 crc kubenswrapper[4770]: E0126 18:54:27.229623 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaefc356-416c-4919-adb1-de98e007e7a1" containerName="util" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.229629 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaefc356-416c-4919-adb1-de98e007e7a1" containerName="util" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.229760 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaefc356-416c-4919-adb1-de98e007e7a1" containerName="extract" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.230230 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.233417 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.233424 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-v59p4" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.239764 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.241044 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.308196 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4ltf\" (UniqueName: \"kubernetes.io/projected/3856ceb2-87c8-4db0-bbb8-66cf7713accc-kube-api-access-f4ltf\") pod \"obo-prometheus-operator-68bc856cb9-fhb9k\" (UID: \"3856ceb2-87c8-4db0-bbb8-66cf7713accc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.409948 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4ltf\" (UniqueName: \"kubernetes.io/projected/3856ceb2-87c8-4db0-bbb8-66cf7713accc-kube-api-access-f4ltf\") pod \"obo-prometheus-operator-68bc856cb9-fhb9k\" (UID: \"3856ceb2-87c8-4db0-bbb8-66cf7713accc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.417844 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.418523 4770 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.420248 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.422419 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-cfkhp" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.434300 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.435176 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.440901 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.450001 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.451180 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4ltf\" (UniqueName: \"kubernetes.io/projected/3856ceb2-87c8-4db0-bbb8-66cf7713accc-kube-api-access-f4ltf\") pod \"obo-prometheus-operator-68bc856cb9-fhb9k\" (UID: \"3856ceb2-87c8-4db0-bbb8-66cf7713accc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.511405 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2308db67-1c3e-465c-8574-58fe145f34e4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5\" (UID: \"2308db67-1c3e-465c-8574-58fe145f34e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.511451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2308db67-1c3e-465c-8574-58fe145f34e4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5\" (UID: \"2308db67-1c3e-465c-8574-58fe145f34e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.511484 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d01f9de-1cce-41c6-9a48-914289d32207-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js\" (UID: \"2d01f9de-1cce-41c6-9a48-914289d32207\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.511618 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d01f9de-1cce-41c6-9a48-914289d32207-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js\" (UID: \"2d01f9de-1cce-41c6-9a48-914289d32207\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.552653 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.563270 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kgxzc"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.564270 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.571009 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.571332 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-gpxvc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.613446 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2308db67-1c3e-465c-8574-58fe145f34e4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5\" (UID: \"2308db67-1c3e-465c-8574-58fe145f34e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.613517 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2308db67-1c3e-465c-8574-58fe145f34e4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5\" (UID: \"2308db67-1c3e-465c-8574-58fe145f34e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.613567 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ccp\" (UniqueName: 
\"kubernetes.io/projected/5660d99f-cacd-4602-83a8-e6e152380afc-kube-api-access-95ccp\") pod \"observability-operator-59bdc8b94-kgxzc\" (UID: \"5660d99f-cacd-4602-83a8-e6e152380afc\") " pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.613605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d01f9de-1cce-41c6-9a48-914289d32207-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js\" (UID: \"2d01f9de-1cce-41c6-9a48-914289d32207\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.613638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d01f9de-1cce-41c6-9a48-914289d32207-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js\" (UID: \"2d01f9de-1cce-41c6-9a48-914289d32207\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.613689 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5660d99f-cacd-4602-83a8-e6e152380afc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kgxzc\" (UID: \"5660d99f-cacd-4602-83a8-e6e152380afc\") " pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.620237 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2308db67-1c3e-465c-8574-58fe145f34e4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5\" (UID: \"2308db67-1c3e-465c-8574-58fe145f34e4\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.620319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2308db67-1c3e-465c-8574-58fe145f34e4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5\" (UID: \"2308db67-1c3e-465c-8574-58fe145f34e4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.620321 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d01f9de-1cce-41c6-9a48-914289d32207-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js\" (UID: \"2d01f9de-1cce-41c6-9a48-914289d32207\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.620533 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2d01f9de-1cce-41c6-9a48-914289d32207-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js\" (UID: \"2d01f9de-1cce-41c6-9a48-914289d32207\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.633605 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kgxzc"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.714575 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5660d99f-cacd-4602-83a8-e6e152380afc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kgxzc\" (UID: \"5660d99f-cacd-4602-83a8-e6e152380afc\") " 
pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.714991 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ccp\" (UniqueName: \"kubernetes.io/projected/5660d99f-cacd-4602-83a8-e6e152380afc-kube-api-access-95ccp\") pod \"observability-operator-59bdc8b94-kgxzc\" (UID: \"5660d99f-cacd-4602-83a8-e6e152380afc\") " pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.721430 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5660d99f-cacd-4602-83a8-e6e152380afc-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kgxzc\" (UID: \"5660d99f-cacd-4602-83a8-e6e152380afc\") " pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.737986 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.739277 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ccp\" (UniqueName: \"kubernetes.io/projected/5660d99f-cacd-4602-83a8-e6e152380afc-kube-api-access-95ccp\") pod \"observability-operator-59bdc8b94-kgxzc\" (UID: \"5660d99f-cacd-4602-83a8-e6e152380afc\") " pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.750985 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.791675 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gjmw8"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.792558 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gjmw8"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.792677 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.798769 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-659hj" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.815766 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d294f34-81c6-46f1-9fa0-5950a2a7337f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gjmw8\" (UID: \"1d294f34-81c6-46f1-9fa0-5950a2a7337f\") " pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.815866 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mps2w\" (UniqueName: \"kubernetes.io/projected/1d294f34-81c6-46f1-9fa0-5950a2a7337f-kube-api-access-mps2w\") pod \"perses-operator-5bf474d74f-gjmw8\" (UID: \"1d294f34-81c6-46f1-9fa0-5950a2a7337f\") " pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.851153 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k"] Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.919944 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d294f34-81c6-46f1-9fa0-5950a2a7337f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gjmw8\" (UID: \"1d294f34-81c6-46f1-9fa0-5950a2a7337f\") " pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.920019 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mps2w\" (UniqueName: \"kubernetes.io/projected/1d294f34-81c6-46f1-9fa0-5950a2a7337f-kube-api-access-mps2w\") pod \"perses-operator-5bf474d74f-gjmw8\" (UID: \"1d294f34-81c6-46f1-9fa0-5950a2a7337f\") " pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.924138 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d294f34-81c6-46f1-9fa0-5950a2a7337f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gjmw8\" (UID: \"1d294f34-81c6-46f1-9fa0-5950a2a7337f\") " pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.943444 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mps2w\" (UniqueName: \"kubernetes.io/projected/1d294f34-81c6-46f1-9fa0-5950a2a7337f-kube-api-access-mps2w\") pod \"perses-operator-5bf474d74f-gjmw8\" (UID: \"1d294f34-81c6-46f1-9fa0-5950a2a7337f\") " pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:27 crc kubenswrapper[4770]: I0126 18:54:27.955810 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.113064 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.128031 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js"] Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.193979 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5"] Jan 26 18:54:28 crc kubenswrapper[4770]: W0126 18:54:28.202477 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2308db67_1c3e_465c_8574_58fe145f34e4.slice/crio-1559fcb196625a09883c113d9ea275b6a70cc23361fc93b8770c6130fccbd6c9 WatchSource:0}: Error finding container 1559fcb196625a09883c113d9ea275b6a70cc23361fc93b8770c6130fccbd6c9: Status 404 returned error can't find the container with id 1559fcb196625a09883c113d9ea275b6a70cc23361fc93b8770c6130fccbd6c9 Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.504686 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kgxzc"] Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.570210 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" event={"ID":"2d01f9de-1cce-41c6-9a48-914289d32207","Type":"ContainerStarted","Data":"6d14990480858d2c2ec3f5a8f096baa40ce1e919985e1b2c1ca05eaa0bda5102"} Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.571553 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" event={"ID":"2308db67-1c3e-465c-8574-58fe145f34e4","Type":"ContainerStarted","Data":"1559fcb196625a09883c113d9ea275b6a70cc23361fc93b8770c6130fccbd6c9"} Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.572867 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" event={"ID":"5660d99f-cacd-4602-83a8-e6e152380afc","Type":"ContainerStarted","Data":"d0a36dc6ea6d154bdbab65ef4055690bb4c22e7bb02531df989e53bec3f44406"} Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.574053 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" event={"ID":"3856ceb2-87c8-4db0-bbb8-66cf7713accc","Type":"ContainerStarted","Data":"c14b3c36e00f1f17e55ae6433f7812eae183ab05b189ea3c48f1bfde21968478"} Jan 26 18:54:28 crc kubenswrapper[4770]: I0126 18:54:28.620464 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gjmw8"] Jan 26 18:54:29 crc kubenswrapper[4770]: I0126 18:54:29.582261 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" event={"ID":"1d294f34-81c6-46f1-9fa0-5950a2a7337f","Type":"ContainerStarted","Data":"59892d3715953655d01328aa88ada1cb510120bcb4240a54f1bd4b911a3144d6"} Jan 26 18:54:29 crc kubenswrapper[4770]: I0126 18:54:29.689173 4770 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 18:54:30 crc kubenswrapper[4770]: I0126 18:54:30.336212 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:54:30 crc kubenswrapper[4770]: I0126 18:54:30.336309 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.682447 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" event={"ID":"1d294f34-81c6-46f1-9fa0-5950a2a7337f","Type":"ContainerStarted","Data":"911905f37110d8ab797bb09c4e6e7a0e534d08082862f40a12b41b73334f260c"} Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.683063 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.684041 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" event={"ID":"2d01f9de-1cce-41c6-9a48-914289d32207","Type":"ContainerStarted","Data":"1df3953da8963e74a8d25454df66e2714a46bb48cfbac290adcafe6b959b84ac"} Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.685942 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" event={"ID":"2308db67-1c3e-465c-8574-58fe145f34e4","Type":"ContainerStarted","Data":"63d4fec294f0609818003701501c3370177710e62cacdc07f9eb426550d82b36"} Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.687293 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" event={"ID":"5660d99f-cacd-4602-83a8-e6e152380afc","Type":"ContainerStarted","Data":"19b79b85a342975ede99cdea82843f08fe7677d1027b90e1a3a6b0455be0e5fa"} Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.688740 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.690001 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.690785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" event={"ID":"3856ceb2-87c8-4db0-bbb8-66cf7713accc","Type":"ContainerStarted","Data":"5961472aa4e74396d40980971cd31d024df639c7e610539751005f2ed9d2c472"} Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.716648 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" podStartSLOduration=2.820312599 podStartE2EDuration="13.716629224s" podCreationTimestamp="2026-01-26 18:54:27 +0000 UTC" firstStartedPulling="2026-01-26 18:54:28.628856915 +0000 UTC m=+753.193763647" lastFinishedPulling="2026-01-26 18:54:39.52517354 +0000 UTC m=+764.090080272" observedRunningTime="2026-01-26 18:54:40.710642711 +0000 UTC m=+765.275549463" watchObservedRunningTime="2026-01-26 18:54:40.716629224 +0000 UTC m=+765.281535956" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.734970 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js" podStartSLOduration=2.339391392 podStartE2EDuration="13.73495222s" podCreationTimestamp="2026-01-26 18:54:27 +0000 UTC" firstStartedPulling="2026-01-26 18:54:28.149950473 +0000 UTC m=+752.714857205" lastFinishedPulling="2026-01-26 18:54:39.545511301 +0000 UTC m=+764.110418033" observedRunningTime="2026-01-26 18:54:40.73088144 +0000 UTC m=+765.295788172" watchObservedRunningTime="2026-01-26 18:54:40.73495222 +0000 UTC m=+765.299858952" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.758230 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-kgxzc" podStartSLOduration=2.664951351 podStartE2EDuration="13.75821101s" podCreationTimestamp="2026-01-26 
18:54:27 +0000 UTC" firstStartedPulling="2026-01-26 18:54:28.511741384 +0000 UTC m=+753.076648116" lastFinishedPulling="2026-01-26 18:54:39.605001043 +0000 UTC m=+764.169907775" observedRunningTime="2026-01-26 18:54:40.75598235 +0000 UTC m=+765.320889102" watchObservedRunningTime="2026-01-26 18:54:40.75821101 +0000 UTC m=+765.323117742" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.787234 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fhb9k" podStartSLOduration=2.133900916 podStartE2EDuration="13.787208796s" podCreationTimestamp="2026-01-26 18:54:27 +0000 UTC" firstStartedPulling="2026-01-26 18:54:27.871810519 +0000 UTC m=+752.436717251" lastFinishedPulling="2026-01-26 18:54:39.525118399 +0000 UTC m=+764.090025131" observedRunningTime="2026-01-26 18:54:40.780612587 +0000 UTC m=+765.345519319" watchObservedRunningTime="2026-01-26 18:54:40.787208796 +0000 UTC m=+765.352115528" Jan 26 18:54:40 crc kubenswrapper[4770]: I0126 18:54:40.802432 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5" podStartSLOduration=2.485473279 podStartE2EDuration="13.802411057s" podCreationTimestamp="2026-01-26 18:54:27 +0000 UTC" firstStartedPulling="2026-01-26 18:54:28.207348318 +0000 UTC m=+752.772255050" lastFinishedPulling="2026-01-26 18:54:39.524286096 +0000 UTC m=+764.089192828" observedRunningTime="2026-01-26 18:54:40.798283886 +0000 UTC m=+765.363190628" watchObservedRunningTime="2026-01-26 18:54:40.802411057 +0000 UTC m=+765.367317789" Jan 26 18:54:48 crc kubenswrapper[4770]: I0126 18:54:48.115886 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-gjmw8" Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.331008 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.331855 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.331920 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.333044 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a472ada11cc8156b8c652f50413b2cfc3ca2807a990cd33cf00079d10d205fee"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.333153 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://a472ada11cc8156b8c652f50413b2cfc3ca2807a990cd33cf00079d10d205fee" gracePeriod=600 Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.813428 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="a472ada11cc8156b8c652f50413b2cfc3ca2807a990cd33cf00079d10d205fee" exitCode=0 Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.813524 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"a472ada11cc8156b8c652f50413b2cfc3ca2807a990cd33cf00079d10d205fee"} Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.814112 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"759ad108705104ebfd180c02710e3cc9f867c8dcc0c0763f8371a75d18ecbaef"} Jan 26 18:55:00 crc kubenswrapper[4770]: I0126 18:55:00.814146 4770 scope.go:117] "RemoveContainer" containerID="e8d33ec21a7bce033c16a0817e158b81ce8af4caff96675d5131a56d2e6cf8d9" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.588946 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562"] Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.591391 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.593851 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.597574 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562"] Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.770844 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.770910 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.771003 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5cr\" (UniqueName: \"kubernetes.io/projected/b4afedef-6113-4a5f-94b0-dfe367e727f7-kube-api-access-gt5cr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: 
I0126 18:55:08.872265 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5cr\" (UniqueName: \"kubernetes.io/projected/b4afedef-6113-4a5f-94b0-dfe367e727f7-kube-api-access-gt5cr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.872354 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.872390 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.873043 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.873247 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.893980 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5cr\" (UniqueName: \"kubernetes.io/projected/b4afedef-6113-4a5f-94b0-dfe367e727f7-kube-api-access-gt5cr\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:08 crc kubenswrapper[4770]: I0126 18:55:08.910034 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:09 crc kubenswrapper[4770]: I0126 18:55:09.388005 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562"] Jan 26 18:55:09 crc kubenswrapper[4770]: I0126 18:55:09.868459 4770 generic.go:334] "Generic (PLEG): container finished" podID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerID="f318454142208d42fca803118104613ff9f31803707657e2f4db961a39066a1f" exitCode=0 Jan 26 18:55:09 crc kubenswrapper[4770]: I0126 18:55:09.868525 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" event={"ID":"b4afedef-6113-4a5f-94b0-dfe367e727f7","Type":"ContainerDied","Data":"f318454142208d42fca803118104613ff9f31803707657e2f4db961a39066a1f"} Jan 26 18:55:09 crc kubenswrapper[4770]: I0126 18:55:09.868884 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" event={"ID":"b4afedef-6113-4a5f-94b0-dfe367e727f7","Type":"ContainerStarted","Data":"2223ab6fa9364c9f04e99a2ec2a9c1048d0d168ae2bc14bb2f67eb58294e32fa"} Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.886481 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2rbqp"] Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.888536 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.897788 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws7x2\" (UniqueName: \"kubernetes.io/projected/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-kube-api-access-ws7x2\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.897884 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-catalog-content\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.898043 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-utilities\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.907120 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-2rbqp"] Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.999498 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-utilities\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.999792 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws7x2\" (UniqueName: \"kubernetes.io/projected/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-kube-api-access-ws7x2\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:10 crc kubenswrapper[4770]: I0126 18:55:10.999898 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-catalog-content\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.000057 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-utilities\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.000382 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-catalog-content\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:11 
crc kubenswrapper[4770]: I0126 18:55:11.021976 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws7x2\" (UniqueName: \"kubernetes.io/projected/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-kube-api-access-ws7x2\") pod \"redhat-operators-2rbqp\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.209239 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.420578 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rbqp"] Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.881859 4770 generic.go:334] "Generic (PLEG): container finished" podID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerID="b6613774d527aaa125cba56266f3a350236a0de872d5f789d5da545c0a303741" exitCode=0 Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.881920 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" event={"ID":"b4afedef-6113-4a5f-94b0-dfe367e727f7","Type":"ContainerDied","Data":"b6613774d527aaa125cba56266f3a350236a0de872d5f789d5da545c0a303741"} Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.883325 4770 generic.go:334] "Generic (PLEG): container finished" podID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerID="fadae1190b353aa0003827894e7c2a13994c60638c759de95df227882a77c2d9" exitCode=0 Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.883372 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rbqp" event={"ID":"2f11afb4-c84b-4ee2-869b-7e4b25fa2304","Type":"ContainerDied","Data":"fadae1190b353aa0003827894e7c2a13994c60638c759de95df227882a77c2d9"} Jan 26 18:55:11 crc kubenswrapper[4770]: I0126 18:55:11.883402 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rbqp" event={"ID":"2f11afb4-c84b-4ee2-869b-7e4b25fa2304","Type":"ContainerStarted","Data":"49f73ecdffa961403fe3ab7f4f079ae2d211d290f48d4574d52ed83b371da99d"} Jan 26 18:55:12 crc kubenswrapper[4770]: I0126 18:55:12.892539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rbqp" event={"ID":"2f11afb4-c84b-4ee2-869b-7e4b25fa2304","Type":"ContainerStarted","Data":"aeae434cd3a1dad9880146a71ce8ee1d57e60b9bcad71d956f5bf84614ac8323"} Jan 26 18:55:12 crc kubenswrapper[4770]: I0126 18:55:12.896434 4770 generic.go:334] "Generic (PLEG): container finished" podID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerID="ecf1645b8a3434704c73640c913fa32bf3ba4619cccee65078d04cabb7ebaead" exitCode=0 Jan 26 18:55:12 crc kubenswrapper[4770]: I0126 18:55:12.896474 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" event={"ID":"b4afedef-6113-4a5f-94b0-dfe367e727f7","Type":"ContainerDied","Data":"ecf1645b8a3434704c73640c913fa32bf3ba4619cccee65078d04cabb7ebaead"} Jan 26 18:55:13 crc kubenswrapper[4770]: I0126 18:55:13.907137 4770 generic.go:334] "Generic (PLEG): container finished" podID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerID="aeae434cd3a1dad9880146a71ce8ee1d57e60b9bcad71d956f5bf84614ac8323" exitCode=0 Jan 26 18:55:13 crc kubenswrapper[4770]: I0126 18:55:13.907236 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rbqp" event={"ID":"2f11afb4-c84b-4ee2-869b-7e4b25fa2304","Type":"ContainerDied","Data":"aeae434cd3a1dad9880146a71ce8ee1d57e60b9bcad71d956f5bf84614ac8323"} Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.141303 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.342611 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-bundle\") pod \"b4afedef-6113-4a5f-94b0-dfe367e727f7\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.342781 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-util\") pod \"b4afedef-6113-4a5f-94b0-dfe367e727f7\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.342879 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt5cr\" (UniqueName: \"kubernetes.io/projected/b4afedef-6113-4a5f-94b0-dfe367e727f7-kube-api-access-gt5cr\") pod \"b4afedef-6113-4a5f-94b0-dfe367e727f7\" (UID: \"b4afedef-6113-4a5f-94b0-dfe367e727f7\") " Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.343257 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-bundle" (OuterVolumeSpecName: "bundle") pod "b4afedef-6113-4a5f-94b0-dfe367e727f7" (UID: "b4afedef-6113-4a5f-94b0-dfe367e727f7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.348930 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4afedef-6113-4a5f-94b0-dfe367e727f7-kube-api-access-gt5cr" (OuterVolumeSpecName: "kube-api-access-gt5cr") pod "b4afedef-6113-4a5f-94b0-dfe367e727f7" (UID: "b4afedef-6113-4a5f-94b0-dfe367e727f7"). InnerVolumeSpecName "kube-api-access-gt5cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.359926 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-util" (OuterVolumeSpecName: "util") pod "b4afedef-6113-4a5f-94b0-dfe367e727f7" (UID: "b4afedef-6113-4a5f-94b0-dfe367e727f7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.443992 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt5cr\" (UniqueName: \"kubernetes.io/projected/b4afedef-6113-4a5f-94b0-dfe367e727f7-kube-api-access-gt5cr\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.444138 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.444219 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4afedef-6113-4a5f-94b0-dfe367e727f7-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.916683 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rbqp" event={"ID":"2f11afb4-c84b-4ee2-869b-7e4b25fa2304","Type":"ContainerStarted","Data":"10daab1d8fac251eeb01091619019cc6bd09909e75e0f39f212b1886fc2bd748"} Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.919623 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" event={"ID":"b4afedef-6113-4a5f-94b0-dfe367e727f7","Type":"ContainerDied","Data":"2223ab6fa9364c9f04e99a2ec2a9c1048d0d168ae2bc14bb2f67eb58294e32fa"} Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.919674 4770 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2223ab6fa9364c9f04e99a2ec2a9c1048d0d168ae2bc14bb2f67eb58294e32fa" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.919755 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562" Jan 26 18:55:14 crc kubenswrapper[4770]: I0126 18:55:14.941659 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2rbqp" podStartSLOduration=2.507467482 podStartE2EDuration="4.941640897s" podCreationTimestamp="2026-01-26 18:55:10 +0000 UTC" firstStartedPulling="2026-01-26 18:55:11.884417215 +0000 UTC m=+796.449323967" lastFinishedPulling="2026-01-26 18:55:14.31859065 +0000 UTC m=+798.883497382" observedRunningTime="2026-01-26 18:55:14.941033291 +0000 UTC m=+799.505940083" watchObservedRunningTime="2026-01-26 18:55:14.941640897 +0000 UTC m=+799.506547629" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.356892 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5p9rk"] Jan 26 18:55:18 crc kubenswrapper[4770]: E0126 18:55:18.357630 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerName="pull" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.357646 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerName="pull" Jan 26 18:55:18 crc kubenswrapper[4770]: E0126 18:55:18.357660 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerName="extract" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.357668 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerName="extract" Jan 26 18:55:18 crc kubenswrapper[4770]: E0126 
18:55:18.357690 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerName="util" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.357719 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerName="util" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.357853 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4afedef-6113-4a5f-94b0-dfe367e727f7" containerName="extract" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.358360 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.361311 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.361423 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fvmjg" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.361442 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.368470 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5p9rk"] Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.493417 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n76s\" (UniqueName: \"kubernetes.io/projected/c9d57646-d6ef-42b3-8d4e-445486b6e18d-kube-api-access-7n76s\") pod \"nmstate-operator-646758c888-5p9rk\" (UID: \"c9d57646-d6ef-42b3-8d4e-445486b6e18d\") " pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.594573 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7n76s\" (UniqueName: \"kubernetes.io/projected/c9d57646-d6ef-42b3-8d4e-445486b6e18d-kube-api-access-7n76s\") pod \"nmstate-operator-646758c888-5p9rk\" (UID: \"c9d57646-d6ef-42b3-8d4e-445486b6e18d\") " pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.612815 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n76s\" (UniqueName: \"kubernetes.io/projected/c9d57646-d6ef-42b3-8d4e-445486b6e18d-kube-api-access-7n76s\") pod \"nmstate-operator-646758c888-5p9rk\" (UID: \"c9d57646-d6ef-42b3-8d4e-445486b6e18d\") " pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" Jan 26 18:55:18 crc kubenswrapper[4770]: I0126 18:55:18.726393 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" Jan 26 18:55:19 crc kubenswrapper[4770]: I0126 18:55:19.138063 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5p9rk"] Jan 26 18:55:19 crc kubenswrapper[4770]: I0126 18:55:19.947718 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" event={"ID":"c9d57646-d6ef-42b3-8d4e-445486b6e18d","Type":"ContainerStarted","Data":"b49d04257af79bad750b7db5b256ca90fa5b52ba036f5fd9c61ca2efab9641e6"} Jan 26 18:55:21 crc kubenswrapper[4770]: I0126 18:55:21.209894 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:21 crc kubenswrapper[4770]: I0126 18:55:21.210228 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:21 crc kubenswrapper[4770]: I0126 18:55:21.264007 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:21 crc 
kubenswrapper[4770]: I0126 18:55:21.998948 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:22 crc kubenswrapper[4770]: I0126 18:55:22.972773 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" event={"ID":"c9d57646-d6ef-42b3-8d4e-445486b6e18d","Type":"ContainerStarted","Data":"e3e00cacd4eac2e9cce95a6100475147a3a9962615e8d275d2f57b4bde1486fe"} Jan 26 18:55:22 crc kubenswrapper[4770]: I0126 18:55:22.994325 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-5p9rk" podStartSLOduration=1.704116862 podStartE2EDuration="4.994300634s" podCreationTimestamp="2026-01-26 18:55:18 +0000 UTC" firstStartedPulling="2026-01-26 18:55:19.129596879 +0000 UTC m=+803.694503611" lastFinishedPulling="2026-01-26 18:55:22.419780631 +0000 UTC m=+806.984687383" observedRunningTime="2026-01-26 18:55:22.990198873 +0000 UTC m=+807.555105605" watchObservedRunningTime="2026-01-26 18:55:22.994300634 +0000 UTC m=+807.559207366" Jan 26 18:55:23 crc kubenswrapper[4770]: I0126 18:55:23.864954 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rbqp"] Jan 26 18:55:23 crc kubenswrapper[4770]: I0126 18:55:23.971481 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wx7dh"] Jan 26 18:55:23 crc kubenswrapper[4770]: I0126 18:55:23.972363 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" Jan 26 18:55:23 crc kubenswrapper[4770]: I0126 18:55:23.977379 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rzsht" Jan 26 18:55:23 crc kubenswrapper[4770]: I0126 18:55:23.978236 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2rbqp" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="registry-server" containerID="cri-o://10daab1d8fac251eeb01091619019cc6bd09909e75e0f39f212b1886fc2bd748" gracePeriod=2 Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.007223 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wx7dh"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.042759 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.043773 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.044969 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-224k9"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.045869 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.052979 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.062263 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.163335 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qddj\" (UniqueName: \"kubernetes.io/projected/0f6003df-fc85-4c3a-ad98-822f6e7d670d-kube-api-access-4qddj\") pod \"nmstate-webhook-8474b5b9d8-qplrm\" (UID: \"0f6003df-fc85-4c3a-ad98-822f6e7d670d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.163413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0f6003df-fc85-4c3a-ad98-822f6e7d670d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qplrm\" (UID: \"0f6003df-fc85-4c3a-ad98-822f6e7d670d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.163444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-ovs-socket\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.163490 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-dbus-socket\") pod \"nmstate-handler-224k9\" (UID: 
\"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.163513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njdck\" (UniqueName: \"kubernetes.io/projected/47ba2b14-2e10-43cd-9b79-1c9350662bc0-kube-api-access-njdck\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.163528 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-nmstate-lock\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.163543 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqvr\" (UniqueName: \"kubernetes.io/projected/a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7-kube-api-access-ppqvr\") pod \"nmstate-metrics-54757c584b-wx7dh\" (UID: \"a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.221979 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.223318 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.225644 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.226431 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gxtg7" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.227867 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.233269 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.264691 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0f6003df-fc85-4c3a-ad98-822f6e7d670d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qplrm\" (UID: \"0f6003df-fc85-4c3a-ad98-822f6e7d670d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.264754 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-ovs-socket\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.264786 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-dbus-socket\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.264808 
4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njdck\" (UniqueName: \"kubernetes.io/projected/47ba2b14-2e10-43cd-9b79-1c9350662bc0-kube-api-access-njdck\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.264828 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-nmstate-lock\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.264845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqvr\" (UniqueName: \"kubernetes.io/projected/a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7-kube-api-access-ppqvr\") pod \"nmstate-metrics-54757c584b-wx7dh\" (UID: \"a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.264895 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qddj\" (UniqueName: \"kubernetes.io/projected/0f6003df-fc85-4c3a-ad98-822f6e7d670d-kube-api-access-4qddj\") pod \"nmstate-webhook-8474b5b9d8-qplrm\" (UID: \"0f6003df-fc85-4c3a-ad98-822f6e7d670d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.265164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-ovs-socket\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.265314 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-nmstate-lock\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.265351 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/47ba2b14-2e10-43cd-9b79-1c9350662bc0-dbus-socket\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.287319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0f6003df-fc85-4c3a-ad98-822f6e7d670d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qplrm\" (UID: \"0f6003df-fc85-4c3a-ad98-822f6e7d670d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.287433 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqvr\" (UniqueName: \"kubernetes.io/projected/a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7-kube-api-access-ppqvr\") pod \"nmstate-metrics-54757c584b-wx7dh\" (UID: \"a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.291248 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.291579 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njdck\" (UniqueName: \"kubernetes.io/projected/47ba2b14-2e10-43cd-9b79-1c9350662bc0-kube-api-access-njdck\") pod \"nmstate-handler-224k9\" (UID: \"47ba2b14-2e10-43cd-9b79-1c9350662bc0\") " pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.292511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qddj\" (UniqueName: \"kubernetes.io/projected/0f6003df-fc85-4c3a-ad98-822f6e7d670d-kube-api-access-4qddj\") pod \"nmstate-webhook-8474b5b9d8-qplrm\" (UID: \"0f6003df-fc85-4c3a-ad98-822f6e7d670d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.366216 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e872b8d-441d-4fe7-abe1-12d880b17f99-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.366274 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e872b8d-441d-4fe7-abe1-12d880b17f99-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.366327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjpj\" (UniqueName: 
\"kubernetes.io/projected/4e872b8d-441d-4fe7-abe1-12d880b17f99-kube-api-access-hvjpj\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.372882 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.381379 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.418041 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f6f866d9d-86dfw"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.423917 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.437466 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f6f866d9d-86dfw"] Jan 26 18:55:24 crc kubenswrapper[4770]: W0126 18:55:24.457573 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47ba2b14_2e10_43cd_9b79_1c9350662bc0.slice/crio-2ea935ce03bacfcc42b0dcdec0e0b4b69abc6587922eaf32dd5f03c12414b9ab WatchSource:0}: Error finding container 2ea935ce03bacfcc42b0dcdec0e0b4b69abc6587922eaf32dd5f03c12414b9ab: Status 404 returned error can't find the container with id 2ea935ce03bacfcc42b0dcdec0e0b4b69abc6587922eaf32dd5f03c12414b9ab Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469143 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e872b8d-441d-4fe7-abe1-12d880b17f99-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469193 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e872b8d-441d-4fe7-abe1-12d880b17f99-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469221 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51483a99-82ef-40d0-b787-f6a1d70f2429-console-oauth-config\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469248 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51483a99-82ef-40d0-b787-f6a1d70f2429-console-serving-cert\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469275 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjpj\" (UniqueName: \"kubernetes.io/projected/4e872b8d-441d-4fe7-abe1-12d880b17f99-kube-api-access-hvjpj\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469293 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jctj7\" (UniqueName: \"kubernetes.io/projected/51483a99-82ef-40d0-b787-f6a1d70f2429-kube-api-access-jctj7\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469329 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-service-ca\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469350 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-console-config\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469366 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-trusted-ca-bundle\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.469384 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-oauth-serving-cert\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.470531 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4e872b8d-441d-4fe7-abe1-12d880b17f99-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.474000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e872b8d-441d-4fe7-abe1-12d880b17f99-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.487060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjpj\" (UniqueName: \"kubernetes.io/projected/4e872b8d-441d-4fe7-abe1-12d880b17f99-kube-api-access-hvjpj\") pod \"nmstate-console-plugin-7754f76f8b-qzgv8\" (UID: \"4e872b8d-441d-4fe7-abe1-12d880b17f99\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.564454 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.570299 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jctj7\" (UniqueName: \"kubernetes.io/projected/51483a99-82ef-40d0-b787-f6a1d70f2429-kube-api-access-jctj7\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.570381 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-service-ca\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.570406 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-console-config\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.570420 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-trusted-ca-bundle\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.570444 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-oauth-serving-cert\") pod \"console-7f6f866d9d-86dfw\" (UID: 
\"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.570485 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51483a99-82ef-40d0-b787-f6a1d70f2429-console-oauth-config\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.570510 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51483a99-82ef-40d0-b787-f6a1d70f2429-console-serving-cert\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.571171 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-service-ca\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.571447 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-oauth-serving-cert\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.572235 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-trusted-ca-bundle\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " 
pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.572940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51483a99-82ef-40d0-b787-f6a1d70f2429-console-config\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.573917 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51483a99-82ef-40d0-b787-f6a1d70f2429-console-serving-cert\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.575254 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51483a99-82ef-40d0-b787-f6a1d70f2429-console-oauth-config\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.590130 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm"] Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.592100 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jctj7\" (UniqueName: \"kubernetes.io/projected/51483a99-82ef-40d0-b787-f6a1d70f2429-kube-api-access-jctj7\") pod \"console-7f6f866d9d-86dfw\" (UID: \"51483a99-82ef-40d0-b787-f6a1d70f2429\") " pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.741633 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-wx7dh"] Jan 26 18:55:24 crc kubenswrapper[4770]: 
I0126 18:55:24.744980 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:24 crc kubenswrapper[4770]: I0126 18:55:24.751583 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8"] Jan 26 18:55:24 crc kubenswrapper[4770]: W0126 18:55:24.753321 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b5a4c0_1a8b_44c7_a2fe_86b4a08628d7.slice/crio-5756ed639e539ea97685a2ccd1f319469a7bddff4dbdacd9c353e2d44b914f5f WatchSource:0}: Error finding container 5756ed639e539ea97685a2ccd1f319469a7bddff4dbdacd9c353e2d44b914f5f: Status 404 returned error can't find the container with id 5756ed639e539ea97685a2ccd1f319469a7bddff4dbdacd9c353e2d44b914f5f Jan 26 18:55:24 crc kubenswrapper[4770]: W0126 18:55:24.764192 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e872b8d_441d_4fe7_abe1_12d880b17f99.slice/crio-0686b10306b9090f0f3aa7f340a54f30a5e0fbd7ef26c05e2b8a296e730ecb0e WatchSource:0}: Error finding container 0686b10306b9090f0f3aa7f340a54f30a5e0fbd7ef26c05e2b8a296e730ecb0e: Status 404 returned error can't find the container with id 0686b10306b9090f0f3aa7f340a54f30a5e0fbd7ef26c05e2b8a296e730ecb0e Jan 26 18:55:25 crc kubenswrapper[4770]: I0126 18:55:24.924419 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f6f866d9d-86dfw"] Jan 26 18:55:25 crc kubenswrapper[4770]: W0126 18:55:24.927910 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51483a99_82ef_40d0_b787_f6a1d70f2429.slice/crio-4091b706010ae92d3bda4b1c2b3c69ef336b535706c26e7258ac69c4b560301a WatchSource:0}: Error finding container 4091b706010ae92d3bda4b1c2b3c69ef336b535706c26e7258ac69c4b560301a: Status 
404 returned error can't find the container with id 4091b706010ae92d3bda4b1c2b3c69ef336b535706c26e7258ac69c4b560301a Jan 26 18:55:25 crc kubenswrapper[4770]: I0126 18:55:24.993337 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" event={"ID":"a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7","Type":"ContainerStarted","Data":"5756ed639e539ea97685a2ccd1f319469a7bddff4dbdacd9c353e2d44b914f5f"} Jan 26 18:55:25 crc kubenswrapper[4770]: I0126 18:55:24.994490 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" event={"ID":"4e872b8d-441d-4fe7-abe1-12d880b17f99","Type":"ContainerStarted","Data":"0686b10306b9090f0f3aa7f340a54f30a5e0fbd7ef26c05e2b8a296e730ecb0e"} Jan 26 18:55:25 crc kubenswrapper[4770]: I0126 18:55:24.996187 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" event={"ID":"0f6003df-fc85-4c3a-ad98-822f6e7d670d","Type":"ContainerStarted","Data":"e61bac976201c7b90cf605303ffef3a060a30b81723451eb3bef6e4bf6ba3943"} Jan 26 18:55:25 crc kubenswrapper[4770]: I0126 18:55:24.997527 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-224k9" event={"ID":"47ba2b14-2e10-43cd-9b79-1c9350662bc0","Type":"ContainerStarted","Data":"2ea935ce03bacfcc42b0dcdec0e0b4b69abc6587922eaf32dd5f03c12414b9ab"} Jan 26 18:55:25 crc kubenswrapper[4770]: I0126 18:55:24.998943 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6f866d9d-86dfw" event={"ID":"51483a99-82ef-40d0-b787-f6a1d70f2429","Type":"ContainerStarted","Data":"4091b706010ae92d3bda4b1c2b3c69ef336b535706c26e7258ac69c4b560301a"} Jan 26 18:55:27 crc kubenswrapper[4770]: I0126 18:55:27.015826 4770 generic.go:334] "Generic (PLEG): container finished" podID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerID="10daab1d8fac251eeb01091619019cc6bd09909e75e0f39f212b1886fc2bd748" exitCode=0 Jan 
26 18:55:27 crc kubenswrapper[4770]: I0126 18:55:27.016013 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rbqp" event={"ID":"2f11afb4-c84b-4ee2-869b-7e4b25fa2304","Type":"ContainerDied","Data":"10daab1d8fac251eeb01091619019cc6bd09909e75e0f39f212b1886fc2bd748"} Jan 26 18:55:27 crc kubenswrapper[4770]: I0126 18:55:27.017934 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6f866d9d-86dfw" event={"ID":"51483a99-82ef-40d0-b787-f6a1d70f2429","Type":"ContainerStarted","Data":"dd4b1c9323f3a4e742cc5ecc5bc457cc463017a658e5c1669783bc20604cb593"} Jan 26 18:55:27 crc kubenswrapper[4770]: I0126 18:55:27.036159 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f6f866d9d-86dfw" podStartSLOduration=3.036144957 podStartE2EDuration="3.036144957s" podCreationTimestamp="2026-01-26 18:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:55:27.031981564 +0000 UTC m=+811.596888296" watchObservedRunningTime="2026-01-26 18:55:27.036144957 +0000 UTC m=+811.601051689" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.028325 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rbqp" event={"ID":"2f11afb4-c84b-4ee2-869b-7e4b25fa2304","Type":"ContainerDied","Data":"49f73ecdffa961403fe3ab7f4f079ae2d211d290f48d4574d52ed83b371da99d"} Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.028759 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f73ecdffa961403fe3ab7f4f079ae2d211d290f48d4574d52ed83b371da99d" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.056280 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.216271 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-catalog-content\") pod \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.216638 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws7x2\" (UniqueName: \"kubernetes.io/projected/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-kube-api-access-ws7x2\") pod \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.216667 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-utilities\") pod \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\" (UID: \"2f11afb4-c84b-4ee2-869b-7e4b25fa2304\") " Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.218504 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-utilities" (OuterVolumeSpecName: "utilities") pod "2f11afb4-c84b-4ee2-869b-7e4b25fa2304" (UID: "2f11afb4-c84b-4ee2-869b-7e4b25fa2304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.240025 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-kube-api-access-ws7x2" (OuterVolumeSpecName: "kube-api-access-ws7x2") pod "2f11afb4-c84b-4ee2-869b-7e4b25fa2304" (UID: "2f11afb4-c84b-4ee2-869b-7e4b25fa2304"). InnerVolumeSpecName "kube-api-access-ws7x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.318096 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws7x2\" (UniqueName: \"kubernetes.io/projected/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-kube-api-access-ws7x2\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.318133 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.362986 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f11afb4-c84b-4ee2-869b-7e4b25fa2304" (UID: "2f11afb4-c84b-4ee2-869b-7e4b25fa2304"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:55:28 crc kubenswrapper[4770]: I0126 18:55:28.418536 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f11afb4-c84b-4ee2-869b-7e4b25fa2304-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.036607 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" event={"ID":"a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7","Type":"ContainerStarted","Data":"1c1b360fdec46e7b055de03eb80f8c0ae727c60c24a6b6fc0be8bd6ef3cf02f1"} Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.037884 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" event={"ID":"4e872b8d-441d-4fe7-abe1-12d880b17f99","Type":"ContainerStarted","Data":"9c27109e9c539f07f5e1ab901766010c17105c29e34538eaa6bca07bd3cec017"} Jan 26 18:55:29 crc 
kubenswrapper[4770]: I0126 18:55:29.039492 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" event={"ID":"0f6003df-fc85-4c3a-ad98-822f6e7d670d","Type":"ContainerStarted","Data":"61429c755de998100bcad4df72c31d8a4bff4a3060f70acebe17287f8b984af5"} Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.039821 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.041054 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rbqp" Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.041158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-224k9" event={"ID":"47ba2b14-2e10-43cd-9b79-1c9350662bc0","Type":"ContainerStarted","Data":"b98ce5d4213d8bcf5b64cee7931bc4a25e8609ffb8aa543371c229d2f94b5581"} Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.041821 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.056625 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-qzgv8" podStartSLOduration=1.779095857 podStartE2EDuration="5.056606717s" podCreationTimestamp="2026-01-26 18:55:24 +0000 UTC" firstStartedPulling="2026-01-26 18:55:24.769596223 +0000 UTC m=+809.334502955" lastFinishedPulling="2026-01-26 18:55:28.047107073 +0000 UTC m=+812.612013815" observedRunningTime="2026-01-26 18:55:29.050878651 +0000 UTC m=+813.615785383" watchObservedRunningTime="2026-01-26 18:55:29.056606717 +0000 UTC m=+813.621513449" Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.069490 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" podStartSLOduration=2.624043678 podStartE2EDuration="6.069474746s" podCreationTimestamp="2026-01-26 18:55:23 +0000 UTC" firstStartedPulling="2026-01-26 18:55:24.604927742 +0000 UTC m=+809.169834474" lastFinishedPulling="2026-01-26 18:55:28.05035881 +0000 UTC m=+812.615265542" observedRunningTime="2026-01-26 18:55:29.068713765 +0000 UTC m=+813.633620497" watchObservedRunningTime="2026-01-26 18:55:29.069474746 +0000 UTC m=+813.634381478" Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.090675 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-224k9" podStartSLOduration=2.502669248 podStartE2EDuration="6.090657099s" podCreationTimestamp="2026-01-26 18:55:23 +0000 UTC" firstStartedPulling="2026-01-26 18:55:24.460978452 +0000 UTC m=+809.025885184" lastFinishedPulling="2026-01-26 18:55:28.048966303 +0000 UTC m=+812.613873035" observedRunningTime="2026-01-26 18:55:29.09031255 +0000 UTC m=+813.655219282" watchObservedRunningTime="2026-01-26 18:55:29.090657099 +0000 UTC m=+813.655563831" Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.107219 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rbqp"] Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.111643 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2rbqp"] Jan 26 18:55:29 crc kubenswrapper[4770]: I0126 18:55:29.775821 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" path="/var/lib/kubelet/pods/2f11afb4-c84b-4ee2-869b-7e4b25fa2304/volumes" Jan 26 18:55:31 crc kubenswrapper[4770]: I0126 18:55:31.077213 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" 
event={"ID":"a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7","Type":"ContainerStarted","Data":"0a245be99ea3014f8a1552bc5899c25ee7381df6e8debc6af72c77541f9c8a70"} Jan 26 18:55:31 crc kubenswrapper[4770]: I0126 18:55:31.114560 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-wx7dh" podStartSLOduration=2.517767387 podStartE2EDuration="8.114533321s" podCreationTimestamp="2026-01-26 18:55:23 +0000 UTC" firstStartedPulling="2026-01-26 18:55:24.759495878 +0000 UTC m=+809.324402610" lastFinishedPulling="2026-01-26 18:55:30.356261812 +0000 UTC m=+814.921168544" observedRunningTime="2026-01-26 18:55:31.103938645 +0000 UTC m=+815.668845467" watchObservedRunningTime="2026-01-26 18:55:31.114533321 +0000 UTC m=+815.679440063" Jan 26 18:55:34 crc kubenswrapper[4770]: I0126 18:55:34.409630 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-224k9" Jan 26 18:55:34 crc kubenswrapper[4770]: I0126 18:55:34.745348 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:34 crc kubenswrapper[4770]: I0126 18:55:34.745681 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:34 crc kubenswrapper[4770]: I0126 18:55:34.751524 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:35 crc kubenswrapper[4770]: I0126 18:55:35.114122 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f6f866d9d-86dfw" Jan 26 18:55:35 crc kubenswrapper[4770]: I0126 18:55:35.159716 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5qzkc"] Jan 26 18:55:44 crc kubenswrapper[4770]: I0126 18:55:44.381248 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qplrm" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.198014 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5qzkc" podUID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" containerName="console" containerID="cri-o://4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382" gracePeriod=15 Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.647387 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5qzkc_d6fd3922-5ed0-4e60-9db5-94eb263b410b/console/0.log" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.647824 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.757028 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-trusted-ca-bundle\") pod \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.757111 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrr59\" (UniqueName: \"kubernetes.io/projected/d6fd3922-5ed0-4e60-9db5-94eb263b410b-kube-api-access-vrr59\") pod \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.757130 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-oauth-serving-cert\") pod \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.757169 
4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-serving-cert\") pod \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.757785 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d6fd3922-5ed0-4e60-9db5-94eb263b410b" (UID: "d6fd3922-5ed0-4e60-9db5-94eb263b410b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.758037 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d6fd3922-5ed0-4e60-9db5-94eb263b410b" (UID: "d6fd3922-5ed0-4e60-9db5-94eb263b410b"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.758837 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-service-ca\") pod \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.758931 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-oauth-config\") pod \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.759006 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-config\") pod \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\" (UID: \"d6fd3922-5ed0-4e60-9db5-94eb263b410b\") " Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.759460 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-service-ca" (OuterVolumeSpecName: "service-ca") pod "d6fd3922-5ed0-4e60-9db5-94eb263b410b" (UID: "d6fd3922-5ed0-4e60-9db5-94eb263b410b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.759482 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-config" (OuterVolumeSpecName: "console-config") pod "d6fd3922-5ed0-4e60-9db5-94eb263b410b" (UID: "d6fd3922-5ed0-4e60-9db5-94eb263b410b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.759732 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.759756 4770 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.759770 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.759781 4770 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.763375 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d6fd3922-5ed0-4e60-9db5-94eb263b410b" (UID: "d6fd3922-5ed0-4e60-9db5-94eb263b410b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.763636 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6fd3922-5ed0-4e60-9db5-94eb263b410b-kube-api-access-vrr59" (OuterVolumeSpecName: "kube-api-access-vrr59") pod "d6fd3922-5ed0-4e60-9db5-94eb263b410b" (UID: "d6fd3922-5ed0-4e60-9db5-94eb263b410b"). InnerVolumeSpecName "kube-api-access-vrr59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.763668 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d6fd3922-5ed0-4e60-9db5-94eb263b410b" (UID: "d6fd3922-5ed0-4e60-9db5-94eb263b410b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.861050 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrr59\" (UniqueName: \"kubernetes.io/projected/d6fd3922-5ed0-4e60-9db5-94eb263b410b-kube-api-access-vrr59\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.861085 4770 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:00 crc kubenswrapper[4770]: I0126 18:56:00.861096 4770 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d6fd3922-5ed0-4e60-9db5-94eb263b410b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.227218 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6"] Jan 26 18:56:01 crc kubenswrapper[4770]: E0126 18:56:01.227621 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="extract-content" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.227639 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="extract-content" Jan 26 18:56:01 crc kubenswrapper[4770]: E0126 
18:56:01.227661 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="extract-utilities" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.227669 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="extract-utilities" Jan 26 18:56:01 crc kubenswrapper[4770]: E0126 18:56:01.227684 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="registry-server" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.227692 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="registry-server" Jan 26 18:56:01 crc kubenswrapper[4770]: E0126 18:56:01.227706 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" containerName="console" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.227714 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" containerName="console" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.227871 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" containerName="console" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.227888 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f11afb4-c84b-4ee2-869b-7e4b25fa2304" containerName="registry-server" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.230863 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.233326 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.236181 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6"] Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.264838 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.264931 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.265208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxjl\" (UniqueName: \"kubernetes.io/projected/43ed5d27-f852-4f01-bf7c-4af96368557e-kube-api-access-fsxjl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: 
I0126 18:56:01.291287 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5qzkc_d6fd3922-5ed0-4e60-9db5-94eb263b410b/console/0.log" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.291349 4770 generic.go:334] "Generic (PLEG): container finished" podID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" containerID="4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382" exitCode=2 Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.291387 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qzkc" event={"ID":"d6fd3922-5ed0-4e60-9db5-94eb263b410b","Type":"ContainerDied","Data":"4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382"} Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.291425 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5qzkc" event={"ID":"d6fd3922-5ed0-4e60-9db5-94eb263b410b","Type":"ContainerDied","Data":"e57583efc2aa7fba7871aec5b64d90d294fdb82da8595802a1c8868ec358b7e2"} Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.291447 4770 scope.go:117] "RemoveContainer" containerID="4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.291469 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5qzkc" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.308547 4770 scope.go:117] "RemoveContainer" containerID="4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382" Jan 26 18:56:01 crc kubenswrapper[4770]: E0126 18:56:01.310336 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382\": container with ID starting with 4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382 not found: ID does not exist" containerID="4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.310384 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382"} err="failed to get container status \"4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382\": rpc error: code = NotFound desc = could not find container \"4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382\": container with ID starting with 4601e70d666e059f5344161431c113304293ec53a21c62872f10391f961fe382 not found: ID does not exist" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.321200 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5qzkc"] Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.333180 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5qzkc"] Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.366678 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: 
\"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.367056 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.367213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.367241 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxjl\" (UniqueName: \"kubernetes.io/projected/43ed5d27-f852-4f01-bf7c-4af96368557e-kube-api-access-fsxjl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.367858 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc 
kubenswrapper[4770]: I0126 18:56:01.387543 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxjl\" (UniqueName: \"kubernetes.io/projected/43ed5d27-f852-4f01-bf7c-4af96368557e-kube-api-access-fsxjl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.552571 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.780270 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6fd3922-5ed0-4e60-9db5-94eb263b410b" path="/var/lib/kubelet/pods/d6fd3922-5ed0-4e60-9db5-94eb263b410b/volumes" Jan 26 18:56:01 crc kubenswrapper[4770]: I0126 18:56:01.798188 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6"] Jan 26 18:56:02 crc kubenswrapper[4770]: I0126 18:56:02.298316 4770 generic.go:334] "Generic (PLEG): container finished" podID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerID="492e37f2e0af3ea929362b8447290f660af628d4d46aaca9dfecd5808f1d1869" exitCode=0 Jan 26 18:56:02 crc kubenswrapper[4770]: I0126 18:56:02.298987 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" event={"ID":"43ed5d27-f852-4f01-bf7c-4af96368557e","Type":"ContainerDied","Data":"492e37f2e0af3ea929362b8447290f660af628d4d46aaca9dfecd5808f1d1869"} Jan 26 18:56:02 crc kubenswrapper[4770]: I0126 18:56:02.299077 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" 
event={"ID":"43ed5d27-f852-4f01-bf7c-4af96368557e","Type":"ContainerStarted","Data":"ccebbab8ee2d85fc60006bfc504bfe440079a0dfec6536af28c923e4c5594b19"} Jan 26 18:56:05 crc kubenswrapper[4770]: I0126 18:56:05.322203 4770 generic.go:334] "Generic (PLEG): container finished" podID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerID="ddaa3382dada3efb3ad6ba8b5dd3e5e9df9e32f101bd7eab2ce956747e7431fe" exitCode=0 Jan 26 18:56:05 crc kubenswrapper[4770]: I0126 18:56:05.322293 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" event={"ID":"43ed5d27-f852-4f01-bf7c-4af96368557e","Type":"ContainerDied","Data":"ddaa3382dada3efb3ad6ba8b5dd3e5e9df9e32f101bd7eab2ce956747e7431fe"} Jan 26 18:56:06 crc kubenswrapper[4770]: I0126 18:56:06.333212 4770 generic.go:334] "Generic (PLEG): container finished" podID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerID="85a2391438cfaa8181aedb1ba4c1436cabb5d01dcb7065ff09bdab086d033df9" exitCode=0 Jan 26 18:56:06 crc kubenswrapper[4770]: I0126 18:56:06.333280 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" event={"ID":"43ed5d27-f852-4f01-bf7c-4af96368557e","Type":"ContainerDied","Data":"85a2391438cfaa8181aedb1ba4c1436cabb5d01dcb7065ff09bdab086d033df9"} Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.569802 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lgrvv"] Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.574113 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.594038 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgrvv"] Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.656021 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.740441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkxw\" (UniqueName: \"kubernetes.io/projected/1924ab31-be1b-4e7d-8070-56ce675932af-kube-api-access-gzkxw\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.740562 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-utilities\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.740591 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-catalog-content\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.841391 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsxjl\" (UniqueName: 
\"kubernetes.io/projected/43ed5d27-f852-4f01-bf7c-4af96368557e-kube-api-access-fsxjl\") pod \"43ed5d27-f852-4f01-bf7c-4af96368557e\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.841542 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-bundle\") pod \"43ed5d27-f852-4f01-bf7c-4af96368557e\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.841616 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-util\") pod \"43ed5d27-f852-4f01-bf7c-4af96368557e\" (UID: \"43ed5d27-f852-4f01-bf7c-4af96368557e\") " Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.841806 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-utilities\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.841883 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-catalog-content\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.841978 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkxw\" (UniqueName: \"kubernetes.io/projected/1924ab31-be1b-4e7d-8070-56ce675932af-kube-api-access-gzkxw\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " 
pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.842417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-catalog-content\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.842526 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-utilities\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.842866 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-bundle" (OuterVolumeSpecName: "bundle") pod "43ed5d27-f852-4f01-bf7c-4af96368557e" (UID: "43ed5d27-f852-4f01-bf7c-4af96368557e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.849869 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ed5d27-f852-4f01-bf7c-4af96368557e-kube-api-access-fsxjl" (OuterVolumeSpecName: "kube-api-access-fsxjl") pod "43ed5d27-f852-4f01-bf7c-4af96368557e" (UID: "43ed5d27-f852-4f01-bf7c-4af96368557e"). InnerVolumeSpecName "kube-api-access-fsxjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.855997 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-util" (OuterVolumeSpecName: "util") pod "43ed5d27-f852-4f01-bf7c-4af96368557e" (UID: "43ed5d27-f852-4f01-bf7c-4af96368557e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.862301 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkxw\" (UniqueName: \"kubernetes.io/projected/1924ab31-be1b-4e7d-8070-56ce675932af-kube-api-access-gzkxw\") pod \"redhat-marketplace-lgrvv\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.900649 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.948919 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.948966 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsxjl\" (UniqueName: \"kubernetes.io/projected/43ed5d27-f852-4f01-bf7c-4af96368557e-kube-api-access-fsxjl\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:07 crc kubenswrapper[4770]: I0126 18:56:07.948984 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/43ed5d27-f852-4f01-bf7c-4af96368557e-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:08 crc kubenswrapper[4770]: I0126 18:56:08.347828 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" event={"ID":"43ed5d27-f852-4f01-bf7c-4af96368557e","Type":"ContainerDied","Data":"ccebbab8ee2d85fc60006bfc504bfe440079a0dfec6536af28c923e4c5594b19"} Jan 26 18:56:08 crc kubenswrapper[4770]: I0126 18:56:08.348226 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccebbab8ee2d85fc60006bfc504bfe440079a0dfec6536af28c923e4c5594b19" Jan 26 18:56:08 crc kubenswrapper[4770]: I0126 18:56:08.347938 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6" Jan 26 18:56:08 crc kubenswrapper[4770]: I0126 18:56:08.398155 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgrvv"] Jan 26 18:56:08 crc kubenswrapper[4770]: W0126 18:56:08.403230 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1924ab31_be1b_4e7d_8070_56ce675932af.slice/crio-3c28f0347d1ce7b598d68a98b304d9aaeb0886198a74e34126dc47e3c5b76bb2 WatchSource:0}: Error finding container 3c28f0347d1ce7b598d68a98b304d9aaeb0886198a74e34126dc47e3c5b76bb2: Status 404 returned error can't find the container with id 3c28f0347d1ce7b598d68a98b304d9aaeb0886198a74e34126dc47e3c5b76bb2 Jan 26 18:56:09 crc kubenswrapper[4770]: I0126 18:56:09.354196 4770 generic.go:334] "Generic (PLEG): container finished" podID="1924ab31-be1b-4e7d-8070-56ce675932af" containerID="817f7d7207f98d917ce7ea113d01e28301645a9321fd210d8259a67caf1c781e" exitCode=0 Jan 26 18:56:09 crc kubenswrapper[4770]: I0126 18:56:09.354461 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgrvv" event={"ID":"1924ab31-be1b-4e7d-8070-56ce675932af","Type":"ContainerDied","Data":"817f7d7207f98d917ce7ea113d01e28301645a9321fd210d8259a67caf1c781e"} Jan 26 18:56:09 
crc kubenswrapper[4770]: I0126 18:56:09.354485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgrvv" event={"ID":"1924ab31-be1b-4e7d-8070-56ce675932af","Type":"ContainerStarted","Data":"3c28f0347d1ce7b598d68a98b304d9aaeb0886198a74e34126dc47e3c5b76bb2"} Jan 26 18:56:11 crc kubenswrapper[4770]: I0126 18:56:11.371291 4770 generic.go:334] "Generic (PLEG): container finished" podID="1924ab31-be1b-4e7d-8070-56ce675932af" containerID="fa9d7a98288e07b62b2f1fb60c5b64604dc4b67343e1e28b160d3ba56c96a93c" exitCode=0 Jan 26 18:56:11 crc kubenswrapper[4770]: I0126 18:56:11.371381 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgrvv" event={"ID":"1924ab31-be1b-4e7d-8070-56ce675932af","Type":"ContainerDied","Data":"fa9d7a98288e07b62b2f1fb60c5b64604dc4b67343e1e28b160d3ba56c96a93c"} Jan 26 18:56:12 crc kubenswrapper[4770]: I0126 18:56:12.381396 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgrvv" event={"ID":"1924ab31-be1b-4e7d-8070-56ce675932af","Type":"ContainerStarted","Data":"f623b4b3b6020b0880b1f8cb792d8360c06912c56da92b2d1c20416d4057bb85"} Jan 26 18:56:12 crc kubenswrapper[4770]: I0126 18:56:12.404032 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lgrvv" podStartSLOduration=3.022763288 podStartE2EDuration="5.404013882s" podCreationTimestamp="2026-01-26 18:56:07 +0000 UTC" firstStartedPulling="2026-01-26 18:56:09.355881261 +0000 UTC m=+853.920787993" lastFinishedPulling="2026-01-26 18:56:11.737131845 +0000 UTC m=+856.302038587" observedRunningTime="2026-01-26 18:56:12.401664748 +0000 UTC m=+856.966571490" watchObservedRunningTime="2026-01-26 18:56:12.404013882 +0000 UTC m=+856.968920604" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.813920 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr"] Jan 26 18:56:16 crc kubenswrapper[4770]: E0126 18:56:16.814389 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerName="pull" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.814400 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerName="pull" Jan 26 18:56:16 crc kubenswrapper[4770]: E0126 18:56:16.814415 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerName="extract" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.814421 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerName="extract" Jan 26 18:56:16 crc kubenswrapper[4770]: E0126 18:56:16.814432 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerName="util" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.814440 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerName="util" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.814529 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ed5d27-f852-4f01-bf7c-4af96368557e" containerName="extract" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.814940 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.816556 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.816587 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.816594 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.817371 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.817482 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ztw8g" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.834867 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr"] Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.975110 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsrx\" (UniqueName: \"kubernetes.io/projected/ee88a890-d295-4129-8baf-ade3a43b3758-kube-api-access-mpsrx\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.975175 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee88a890-d295-4129-8baf-ade3a43b3758-apiservice-cert\") pod 
\"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:16 crc kubenswrapper[4770]: I0126 18:56:16.975341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee88a890-d295-4129-8baf-ade3a43b3758-webhook-cert\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.076559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsrx\" (UniqueName: \"kubernetes.io/projected/ee88a890-d295-4129-8baf-ade3a43b3758-kube-api-access-mpsrx\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.076638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee88a890-d295-4129-8baf-ade3a43b3758-apiservice-cert\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.076713 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee88a890-d295-4129-8baf-ade3a43b3758-webhook-cert\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc 
kubenswrapper[4770]: I0126 18:56:17.087569 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee88a890-d295-4129-8baf-ade3a43b3758-webhook-cert\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.088176 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ee88a890-d295-4129-8baf-ade3a43b3758-apiservice-cert\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.100182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsrx\" (UniqueName: \"kubernetes.io/projected/ee88a890-d295-4129-8baf-ade3a43b3758-kube-api-access-mpsrx\") pod \"metallb-operator-controller-manager-859d6f9486-gtpqr\" (UID: \"ee88a890-d295-4129-8baf-ade3a43b3758\") " pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.129591 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.157119 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln"] Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.159526 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.167415 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.167545 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xcvwd" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.167647 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.172520 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln"] Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.282404 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7b62592-2dab-442b-a5ef-a02562b7ed0c-webhook-cert\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.282453 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsvk\" (UniqueName: \"kubernetes.io/projected/b7b62592-2dab-442b-a5ef-a02562b7ed0c-kube-api-access-rqsvk\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.282486 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/b7b62592-2dab-442b-a5ef-a02562b7ed0c-apiservice-cert\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.384451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7b62592-2dab-442b-a5ef-a02562b7ed0c-webhook-cert\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.385451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqsvk\" (UniqueName: \"kubernetes.io/projected/b7b62592-2dab-442b-a5ef-a02562b7ed0c-kube-api-access-rqsvk\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.385484 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7b62592-2dab-442b-a5ef-a02562b7ed0c-apiservice-cert\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.389117 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7b62592-2dab-442b-a5ef-a02562b7ed0c-webhook-cert\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 
crc kubenswrapper[4770]: I0126 18:56:17.402537 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7b62592-2dab-442b-a5ef-a02562b7ed0c-apiservice-cert\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.419870 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqsvk\" (UniqueName: \"kubernetes.io/projected/b7b62592-2dab-442b-a5ef-a02562b7ed0c-kube-api-access-rqsvk\") pod \"metallb-operator-webhook-server-85d868fd8c-rclln\" (UID: \"b7b62592-2dab-442b-a5ef-a02562b7ed0c\") " pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.512878 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.599722 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr"] Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.740224 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln"] Jan 26 18:56:17 crc kubenswrapper[4770]: W0126 18:56:17.744193 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7b62592_2dab_442b_a5ef_a02562b7ed0c.slice/crio-a69cbd9a6a20b6432501adb4182f507ceaa0b66b6a1eeb6b81b755cb511ea654 WatchSource:0}: Error finding container a69cbd9a6a20b6432501adb4182f507ceaa0b66b6a1eeb6b81b755cb511ea654: Status 404 returned error can't find the container with id a69cbd9a6a20b6432501adb4182f507ceaa0b66b6a1eeb6b81b755cb511ea654 Jan 26 18:56:17 
crc kubenswrapper[4770]: I0126 18:56:17.901743 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.901785 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:17 crc kubenswrapper[4770]: I0126 18:56:17.978125 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:18 crc kubenswrapper[4770]: I0126 18:56:18.433382 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" event={"ID":"ee88a890-d295-4129-8baf-ade3a43b3758","Type":"ContainerStarted","Data":"26213be54202ace7ab2f95aaa1f21a4a767b24e38e27e05c523882feff099f0b"} Jan 26 18:56:18 crc kubenswrapper[4770]: I0126 18:56:18.434671 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" event={"ID":"b7b62592-2dab-442b-a5ef-a02562b7ed0c","Type":"ContainerStarted","Data":"a69cbd9a6a20b6432501adb4182f507ceaa0b66b6a1eeb6b81b755cb511ea654"} Jan 26 18:56:18 crc kubenswrapper[4770]: I0126 18:56:18.477869 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.563625 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jtzgz"] Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.565425 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.583234 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtzgz"] Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.717592 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-catalog-content\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.717657 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjftd\" (UniqueName: \"kubernetes.io/projected/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-kube-api-access-wjftd\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.717686 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-utilities\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.821891 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-catalog-content\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.821933 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wjftd\" (UniqueName: \"kubernetes.io/projected/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-kube-api-access-wjftd\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.821949 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-utilities\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.822294 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-utilities\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.822648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-catalog-content\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.849034 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjftd\" (UniqueName: \"kubernetes.io/projected/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-kube-api-access-wjftd\") pod \"certified-operators-jtzgz\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:19 crc kubenswrapper[4770]: I0126 18:56:19.901446 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:20 crc kubenswrapper[4770]: I0126 18:56:20.202136 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtzgz"] Jan 26 18:56:20 crc kubenswrapper[4770]: I0126 18:56:20.459017 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzgz" event={"ID":"2bc821eb-b8b9-4e8d-950a-5b67bdff51df","Type":"ContainerStarted","Data":"6246820a1ec343d2f4e2f7e1d07d0d8a751065b507ad9f1a885bac07747690e1"} Jan 26 18:56:21 crc kubenswrapper[4770]: I0126 18:56:21.474250 4770 generic.go:334] "Generic (PLEG): container finished" podID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerID="f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338" exitCode=0 Jan 26 18:56:21 crc kubenswrapper[4770]: I0126 18:56:21.474301 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzgz" event={"ID":"2bc821eb-b8b9-4e8d-950a-5b67bdff51df","Type":"ContainerDied","Data":"f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338"} Jan 26 18:56:21 crc kubenswrapper[4770]: I0126 18:56:21.558581 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgrvv"] Jan 26 18:56:21 crc kubenswrapper[4770]: I0126 18:56:21.559628 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lgrvv" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="registry-server" containerID="cri-o://f623b4b3b6020b0880b1f8cb792d8360c06912c56da92b2d1c20416d4057bb85" gracePeriod=2 Jan 26 18:56:22 crc kubenswrapper[4770]: I0126 18:56:22.483174 4770 generic.go:334] "Generic (PLEG): container finished" podID="1924ab31-be1b-4e7d-8070-56ce675932af" containerID="f623b4b3b6020b0880b1f8cb792d8360c06912c56da92b2d1c20416d4057bb85" exitCode=0 Jan 26 18:56:22 crc kubenswrapper[4770]: I0126 
18:56:22.483238 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgrvv" event={"ID":"1924ab31-be1b-4e7d-8070-56ce675932af","Type":"ContainerDied","Data":"f623b4b3b6020b0880b1f8cb792d8360c06912c56da92b2d1c20416d4057bb85"} Jan 26 18:56:23 crc kubenswrapper[4770]: I0126 18:56:23.883246 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:23 crc kubenswrapper[4770]: I0126 18:56:23.981390 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-catalog-content\") pod \"1924ab31-be1b-4e7d-8070-56ce675932af\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " Jan 26 18:56:23 crc kubenswrapper[4770]: I0126 18:56:23.981459 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-utilities\") pod \"1924ab31-be1b-4e7d-8070-56ce675932af\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " Jan 26 18:56:23 crc kubenswrapper[4770]: I0126 18:56:23.981572 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzkxw\" (UniqueName: \"kubernetes.io/projected/1924ab31-be1b-4e7d-8070-56ce675932af-kube-api-access-gzkxw\") pod \"1924ab31-be1b-4e7d-8070-56ce675932af\" (UID: \"1924ab31-be1b-4e7d-8070-56ce675932af\") " Jan 26 18:56:23 crc kubenswrapper[4770]: I0126 18:56:23.983999 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-utilities" (OuterVolumeSpecName: "utilities") pod "1924ab31-be1b-4e7d-8070-56ce675932af" (UID: "1924ab31-be1b-4e7d-8070-56ce675932af"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:23 crc kubenswrapper[4770]: I0126 18:56:23.993196 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1924ab31-be1b-4e7d-8070-56ce675932af-kube-api-access-gzkxw" (OuterVolumeSpecName: "kube-api-access-gzkxw") pod "1924ab31-be1b-4e7d-8070-56ce675932af" (UID: "1924ab31-be1b-4e7d-8070-56ce675932af"). InnerVolumeSpecName "kube-api-access-gzkxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.011895 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1924ab31-be1b-4e7d-8070-56ce675932af" (UID: "1924ab31-be1b-4e7d-8070-56ce675932af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.085709 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.085749 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1924ab31-be1b-4e7d-8070-56ce675932af-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.085763 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzkxw\" (UniqueName: \"kubernetes.io/projected/1924ab31-be1b-4e7d-8070-56ce675932af-kube-api-access-gzkxw\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.502943 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" 
event={"ID":"ee88a890-d295-4129-8baf-ade3a43b3758","Type":"ContainerStarted","Data":"5516a173ac4989544da4bb5850522c9f38397711207e7f570c05362e4d9812f4"} Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.503286 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.504904 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" event={"ID":"b7b62592-2dab-442b-a5ef-a02562b7ed0c","Type":"ContainerStarted","Data":"27df67814967c25f82ade70b6bf96641609f2d75744efb42f58abbb3c1bbbf14"} Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.505021 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.507877 4770 generic.go:334] "Generic (PLEG): container finished" podID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerID="e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20" exitCode=0 Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.507955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzgz" event={"ID":"2bc821eb-b8b9-4e8d-950a-5b67bdff51df","Type":"ContainerDied","Data":"e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20"} Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.511072 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lgrvv" event={"ID":"1924ab31-be1b-4e7d-8070-56ce675932af","Type":"ContainerDied","Data":"3c28f0347d1ce7b598d68a98b304d9aaeb0886198a74e34126dc47e3c5b76bb2"} Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.512511 4770 scope.go:117] "RemoveContainer" containerID="f623b4b3b6020b0880b1f8cb792d8360c06912c56da92b2d1c20416d4057bb85" Jan 26 18:56:24 crc 
kubenswrapper[4770]: I0126 18:56:24.511585 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lgrvv" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.547353 4770 scope.go:117] "RemoveContainer" containerID="fa9d7a98288e07b62b2f1fb60c5b64604dc4b67343e1e28b160d3ba56c96a93c" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.558210 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" podStartSLOduration=2.297771672 podStartE2EDuration="8.558182288s" podCreationTimestamp="2026-01-26 18:56:16 +0000 UTC" firstStartedPulling="2026-01-26 18:56:17.61055538 +0000 UTC m=+862.175462132" lastFinishedPulling="2026-01-26 18:56:23.870966016 +0000 UTC m=+868.435872748" observedRunningTime="2026-01-26 18:56:24.537610856 +0000 UTC m=+869.102517598" watchObservedRunningTime="2026-01-26 18:56:24.558182288 +0000 UTC m=+869.123089020" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.571975 4770 scope.go:117] "RemoveContainer" containerID="817f7d7207f98d917ce7ea113d01e28301645a9321fd210d8259a67caf1c781e" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.582880 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" podStartSLOduration=1.4377712040000001 podStartE2EDuration="7.582859564s" podCreationTimestamp="2026-01-26 18:56:17 +0000 UTC" firstStartedPulling="2026-01-26 18:56:17.748296589 +0000 UTC m=+862.313203311" lastFinishedPulling="2026-01-26 18:56:23.893384939 +0000 UTC m=+868.458291671" observedRunningTime="2026-01-26 18:56:24.576205831 +0000 UTC m=+869.141112583" watchObservedRunningTime="2026-01-26 18:56:24.582859564 +0000 UTC m=+869.147766296" Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.596207 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-lgrvv"] Jan 26 18:56:24 crc kubenswrapper[4770]: I0126 18:56:24.600372 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lgrvv"] Jan 26 18:56:25 crc kubenswrapper[4770]: I0126 18:56:25.518470 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzgz" event={"ID":"2bc821eb-b8b9-4e8d-950a-5b67bdff51df","Type":"ContainerStarted","Data":"33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128"} Jan 26 18:56:25 crc kubenswrapper[4770]: I0126 18:56:25.536465 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jtzgz" podStartSLOduration=3.795709195 podStartE2EDuration="6.536448745s" podCreationTimestamp="2026-01-26 18:56:19 +0000 UTC" firstStartedPulling="2026-01-26 18:56:22.16435386 +0000 UTC m=+866.729260612" lastFinishedPulling="2026-01-26 18:56:24.90509343 +0000 UTC m=+869.470000162" observedRunningTime="2026-01-26 18:56:25.532341033 +0000 UTC m=+870.097247765" watchObservedRunningTime="2026-01-26 18:56:25.536448745 +0000 UTC m=+870.101355477" Jan 26 18:56:25 crc kubenswrapper[4770]: I0126 18:56:25.776954 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" path="/var/lib/kubelet/pods/1924ab31-be1b-4e7d-8070-56ce675932af/volumes" Jan 26 18:56:29 crc kubenswrapper[4770]: I0126 18:56:29.902682 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:29 crc kubenswrapper[4770]: I0126 18:56:29.903315 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:29 crc kubenswrapper[4770]: I0126 18:56:29.949182 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:30 crc kubenswrapper[4770]: I0126 18:56:30.603505 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.364243 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mkd2c"] Jan 26 18:56:32 crc kubenswrapper[4770]: E0126 18:56:32.364496 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="registry-server" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.364508 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="registry-server" Jan 26 18:56:32 crc kubenswrapper[4770]: E0126 18:56:32.364517 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="extract-utilities" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.364523 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="extract-utilities" Jan 26 18:56:32 crc kubenswrapper[4770]: E0126 18:56:32.364535 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="extract-content" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.364542 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="extract-content" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.364647 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1924ab31-be1b-4e7d-8070-56ce675932af" containerName="registry-server" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.365455 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.379833 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkd2c"] Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.404600 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-utilities\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.405167 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-catalog-content\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.405327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pthvn\" (UniqueName: \"kubernetes.io/projected/b559b208-2c07-4d61-a025-5cab6213f5bf-kube-api-access-pthvn\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.505749 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-utilities\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.505815 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-catalog-content\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.505870 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pthvn\" (UniqueName: \"kubernetes.io/projected/b559b208-2c07-4d61-a025-5cab6213f5bf-kube-api-access-pthvn\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.506859 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-utilities\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.507111 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-catalog-content\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.535917 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pthvn\" (UniqueName: \"kubernetes.io/projected/b559b208-2c07-4d61-a025-5cab6213f5bf-kube-api-access-pthvn\") pod \"community-operators-mkd2c\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.704925 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:32 crc kubenswrapper[4770]: I0126 18:56:32.985744 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkd2c"] Jan 26 18:56:32 crc kubenswrapper[4770]: W0126 18:56:32.988442 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb559b208_2c07_4d61_a025_5cab6213f5bf.slice/crio-eccfaad3e8e5e166833a1595ebf3421708fd464347d8be3eb376ae2b7b2adb93 WatchSource:0}: Error finding container eccfaad3e8e5e166833a1595ebf3421708fd464347d8be3eb376ae2b7b2adb93: Status 404 returned error can't find the container with id eccfaad3e8e5e166833a1595ebf3421708fd464347d8be3eb376ae2b7b2adb93 Jan 26 18:56:33 crc kubenswrapper[4770]: I0126 18:56:33.571026 4770 generic.go:334] "Generic (PLEG): container finished" podID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerID="56a6047120c4e1fa460059b022174cc832677a0a40dc1cba56fb38eb30ecad24" exitCode=0 Jan 26 18:56:33 crc kubenswrapper[4770]: I0126 18:56:33.571142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkd2c" event={"ID":"b559b208-2c07-4d61-a025-5cab6213f5bf","Type":"ContainerDied","Data":"56a6047120c4e1fa460059b022174cc832677a0a40dc1cba56fb38eb30ecad24"} Jan 26 18:56:33 crc kubenswrapper[4770]: I0126 18:56:33.571827 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkd2c" event={"ID":"b559b208-2c07-4d61-a025-5cab6213f5bf","Type":"ContainerStarted","Data":"eccfaad3e8e5e166833a1595ebf3421708fd464347d8be3eb376ae2b7b2adb93"} Jan 26 18:56:33 crc kubenswrapper[4770]: I0126 18:56:33.953388 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtzgz"] Jan 26 18:56:33 crc kubenswrapper[4770]: I0126 18:56:33.953905 4770 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-jtzgz" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="registry-server" containerID="cri-o://33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128" gracePeriod=2 Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.311436 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.338148 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-utilities\") pod \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.338187 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjftd\" (UniqueName: \"kubernetes.io/projected/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-kube-api-access-wjftd\") pod \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.338262 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-catalog-content\") pod \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\" (UID: \"2bc821eb-b8b9-4e8d-950a-5b67bdff51df\") " Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.338815 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-utilities" (OuterVolumeSpecName: "utilities") pod "2bc821eb-b8b9-4e8d-950a-5b67bdff51df" (UID: "2bc821eb-b8b9-4e8d-950a-5b67bdff51df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.343286 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-kube-api-access-wjftd" (OuterVolumeSpecName: "kube-api-access-wjftd") pod "2bc821eb-b8b9-4e8d-950a-5b67bdff51df" (UID: "2bc821eb-b8b9-4e8d-950a-5b67bdff51df"). InnerVolumeSpecName "kube-api-access-wjftd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.384717 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bc821eb-b8b9-4e8d-950a-5b67bdff51df" (UID: "2bc821eb-b8b9-4e8d-950a-5b67bdff51df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.439516 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.439553 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjftd\" (UniqueName: \"kubernetes.io/projected/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-kube-api-access-wjftd\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.439564 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bc821eb-b8b9-4e8d-950a-5b67bdff51df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.580259 4770 generic.go:334] "Generic (PLEG): container finished" podID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" 
containerID="33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128" exitCode=0 Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.580330 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtzgz" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.580335 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzgz" event={"ID":"2bc821eb-b8b9-4e8d-950a-5b67bdff51df","Type":"ContainerDied","Data":"33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128"} Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.580896 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzgz" event={"ID":"2bc821eb-b8b9-4e8d-950a-5b67bdff51df","Type":"ContainerDied","Data":"6246820a1ec343d2f4e2f7e1d07d0d8a751065b507ad9f1a885bac07747690e1"} Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.580933 4770 scope.go:117] "RemoveContainer" containerID="33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.600225 4770 scope.go:117] "RemoveContainer" containerID="e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.620940 4770 scope.go:117] "RemoveContainer" containerID="f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.629719 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtzgz"] Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.635111 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jtzgz"] Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.686379 4770 scope.go:117] "RemoveContainer" containerID="33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128" Jan 26 
18:56:34 crc kubenswrapper[4770]: E0126 18:56:34.687004 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128\": container with ID starting with 33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128 not found: ID does not exist" containerID="33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.687043 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128"} err="failed to get container status \"33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128\": rpc error: code = NotFound desc = could not find container \"33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128\": container with ID starting with 33523fc0d4d3e1630928f1246a7ce6ca361fdafcdc0a367d4d8f3aa86ffd2128 not found: ID does not exist" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.687069 4770 scope.go:117] "RemoveContainer" containerID="e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20" Jan 26 18:56:34 crc kubenswrapper[4770]: E0126 18:56:34.687478 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20\": container with ID starting with e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20 not found: ID does not exist" containerID="e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.687523 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20"} err="failed to get container status 
\"e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20\": rpc error: code = NotFound desc = could not find container \"e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20\": container with ID starting with e5fb01500d1c3fd9b761f1fcfc92111baa481374664f1fd1578439261ca1ca20 not found: ID does not exist" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.687559 4770 scope.go:117] "RemoveContainer" containerID="f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338" Jan 26 18:56:34 crc kubenswrapper[4770]: E0126 18:56:34.687835 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338\": container with ID starting with f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338 not found: ID does not exist" containerID="f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338" Jan 26 18:56:34 crc kubenswrapper[4770]: I0126 18:56:34.687857 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338"} err="failed to get container status \"f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338\": rpc error: code = NotFound desc = could not find container \"f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338\": container with ID starting with f60742f0ca9590e8ef8d5bd41dbdeae31fda685e58c0f1d0ba3eae5d4b460338 not found: ID does not exist" Jan 26 18:56:35 crc kubenswrapper[4770]: I0126 18:56:35.775964 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" path="/var/lib/kubelet/pods/2bc821eb-b8b9-4e8d-950a-5b67bdff51df/volumes" Jan 26 18:56:36 crc kubenswrapper[4770]: I0126 18:56:36.597235 4770 generic.go:334] "Generic (PLEG): container finished" podID="b559b208-2c07-4d61-a025-5cab6213f5bf" 
containerID="9c6f1fc5a30def5eac94f83d64e4335b91b1f5b86c3cf322c0ffed91444479a9" exitCode=0 Jan 26 18:56:36 crc kubenswrapper[4770]: I0126 18:56:36.597284 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkd2c" event={"ID":"b559b208-2c07-4d61-a025-5cab6213f5bf","Type":"ContainerDied","Data":"9c6f1fc5a30def5eac94f83d64e4335b91b1f5b86c3cf322c0ffed91444479a9"} Jan 26 18:56:37 crc kubenswrapper[4770]: I0126 18:56:37.517591 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85d868fd8c-rclln" Jan 26 18:56:37 crc kubenswrapper[4770]: I0126 18:56:37.604156 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkd2c" event={"ID":"b559b208-2c07-4d61-a025-5cab6213f5bf","Type":"ContainerStarted","Data":"fb1763becc935a37ec4922a9a7510cdd87321ee0a98a4a61b93b983c149b8e81"} Jan 26 18:56:37 crc kubenswrapper[4770]: I0126 18:56:37.621226 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mkd2c" podStartSLOduration=2.166367274 podStartE2EDuration="5.621208803s" podCreationTimestamp="2026-01-26 18:56:32 +0000 UTC" firstStartedPulling="2026-01-26 18:56:33.572660009 +0000 UTC m=+878.137566751" lastFinishedPulling="2026-01-26 18:56:37.027501538 +0000 UTC m=+881.592408280" observedRunningTime="2026-01-26 18:56:37.617878222 +0000 UTC m=+882.182784964" watchObservedRunningTime="2026-01-26 18:56:37.621208803 +0000 UTC m=+882.186115535" Jan 26 18:56:42 crc kubenswrapper[4770]: I0126 18:56:42.705310 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:42 crc kubenswrapper[4770]: I0126 18:56:42.706028 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:42 crc kubenswrapper[4770]: I0126 
18:56:42.771928 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:43 crc kubenswrapper[4770]: I0126 18:56:43.703667 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:44 crc kubenswrapper[4770]: I0126 18:56:44.156387 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkd2c"] Jan 26 18:56:45 crc kubenswrapper[4770]: I0126 18:56:45.662792 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mkd2c" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="registry-server" containerID="cri-o://fb1763becc935a37ec4922a9a7510cdd87321ee0a98a4a61b93b983c149b8e81" gracePeriod=2 Jan 26 18:56:47 crc kubenswrapper[4770]: I0126 18:56:47.700844 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkd2c" event={"ID":"b559b208-2c07-4d61-a025-5cab6213f5bf","Type":"ContainerDied","Data":"fb1763becc935a37ec4922a9a7510cdd87321ee0a98a4a61b93b983c149b8e81"} Jan 26 18:56:47 crc kubenswrapper[4770]: I0126 18:56:47.700735 4770 generic.go:334] "Generic (PLEG): container finished" podID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerID="fb1763becc935a37ec4922a9a7510cdd87321ee0a98a4a61b93b983c149b8e81" exitCode=0 Jan 26 18:56:47 crc kubenswrapper[4770]: I0126 18:56:47.956291 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.024154 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-utilities\") pod \"b559b208-2c07-4d61-a025-5cab6213f5bf\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.024231 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pthvn\" (UniqueName: \"kubernetes.io/projected/b559b208-2c07-4d61-a025-5cab6213f5bf-kube-api-access-pthvn\") pod \"b559b208-2c07-4d61-a025-5cab6213f5bf\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.024346 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-catalog-content\") pod \"b559b208-2c07-4d61-a025-5cab6213f5bf\" (UID: \"b559b208-2c07-4d61-a025-5cab6213f5bf\") " Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.025730 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-utilities" (OuterVolumeSpecName: "utilities") pod "b559b208-2c07-4d61-a025-5cab6213f5bf" (UID: "b559b208-2c07-4d61-a025-5cab6213f5bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.030446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b559b208-2c07-4d61-a025-5cab6213f5bf-kube-api-access-pthvn" (OuterVolumeSpecName: "kube-api-access-pthvn") pod "b559b208-2c07-4d61-a025-5cab6213f5bf" (UID: "b559b208-2c07-4d61-a025-5cab6213f5bf"). InnerVolumeSpecName "kube-api-access-pthvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.085184 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b559b208-2c07-4d61-a025-5cab6213f5bf" (UID: "b559b208-2c07-4d61-a025-5cab6213f5bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.126618 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.127036 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pthvn\" (UniqueName: \"kubernetes.io/projected/b559b208-2c07-4d61-a025-5cab6213f5bf-kube-api-access-pthvn\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.127166 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b559b208-2c07-4d61-a025-5cab6213f5bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.710265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkd2c" event={"ID":"b559b208-2c07-4d61-a025-5cab6213f5bf","Type":"ContainerDied","Data":"eccfaad3e8e5e166833a1595ebf3421708fd464347d8be3eb376ae2b7b2adb93"} Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.710292 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkd2c" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.710579 4770 scope.go:117] "RemoveContainer" containerID="fb1763becc935a37ec4922a9a7510cdd87321ee0a98a4a61b93b983c149b8e81" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.735940 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkd2c"] Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.739970 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mkd2c"] Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.741490 4770 scope.go:117] "RemoveContainer" containerID="9c6f1fc5a30def5eac94f83d64e4335b91b1f5b86c3cf322c0ffed91444479a9" Jan 26 18:56:48 crc kubenswrapper[4770]: I0126 18:56:48.771478 4770 scope.go:117] "RemoveContainer" containerID="56a6047120c4e1fa460059b022174cc832677a0a40dc1cba56fb38eb30ecad24" Jan 26 18:56:49 crc kubenswrapper[4770]: I0126 18:56:49.779222 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" path="/var/lib/kubelet/pods/b559b208-2c07-4d61-a025-5cab6213f5bf/volumes" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.132217 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-859d6f9486-gtpqr" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976049 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-nkgs9"] Jan 26 18:56:57 crc kubenswrapper[4770]: E0126 18:56:57.976682 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="registry-server" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976717 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="registry-server" Jan 26 18:56:57 crc 
kubenswrapper[4770]: E0126 18:56:57.976744 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="extract-utilities" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976754 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="extract-utilities" Jan 26 18:56:57 crc kubenswrapper[4770]: E0126 18:56:57.976767 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="extract-utilities" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976774 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="extract-utilities" Jan 26 18:56:57 crc kubenswrapper[4770]: E0126 18:56:57.976787 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="extract-content" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976794 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="extract-content" Jan 26 18:56:57 crc kubenswrapper[4770]: E0126 18:56:57.976805 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="extract-content" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976812 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="extract-content" Jan 26 18:56:57 crc kubenswrapper[4770]: E0126 18:56:57.976824 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="registry-server" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976831 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="registry-server" Jan 26 18:56:57 crc 
kubenswrapper[4770]: I0126 18:56:57.976950 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b559b208-2c07-4d61-a025-5cab6213f5bf" containerName="registry-server" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.976966 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bc821eb-b8b9-4e8d-950a-5b67bdff51df" containerName="registry-server" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.979376 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.982656 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.982928 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nk456" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.983091 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 18:56:57 crc kubenswrapper[4770]: I0126 18:56:57.986986 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz"] Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.015963 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.018618 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz"] Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.021097 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070404 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-conf\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070480 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-metrics\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070509 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f9a805c-9078-43b4-a52d-bb6c6d695422-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n5vnz\" (UID: \"8f9a805c-9078-43b4-a52d-bb6c6d695422\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070551 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gfj\" (UniqueName: \"kubernetes.io/projected/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-kube-api-access-m6gfj\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 
18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070590 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-reloader\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070615 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-startup\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070639 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-sockets\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-metrics-certs\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.070712 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8nc\" (UniqueName: \"kubernetes.io/projected/8f9a805c-9078-43b4-a52d-bb6c6d695422-kube-api-access-4m8nc\") pod \"frr-k8s-webhook-server-7df86c4f6c-n5vnz\" (UID: \"8f9a805c-9078-43b4-a52d-bb6c6d695422\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 
18:56:58.071270 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lxhr9"] Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.072394 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.075002 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lgxhp"] Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.075007 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.075258 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8mtcf" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.075353 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.075402 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.076188 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.077816 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.085491 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lgxhp"] Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172032 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-metrics-certs\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172085 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8nc\" (UniqueName: \"kubernetes.io/projected/8f9a805c-9078-43b4-a52d-bb6c6d695422-kube-api-access-4m8nc\") pod \"frr-k8s-webhook-server-7df86c4f6c-n5vnz\" (UID: \"8f9a805c-9078-43b4-a52d-bb6c6d695422\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172132 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-metrics-certs\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172162 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79cdr\" (UniqueName: \"kubernetes.io/projected/95fe3572-9eab-4945-bf35-bcf4cec1764d-kube-api-access-79cdr\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc 
kubenswrapper[4770]: I0126 18:56:58.172185 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-conf\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172215 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-metrics\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172240 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f9a805c-9078-43b4-a52d-bb6c6d695422-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n5vnz\" (UID: \"8f9a805c-9078-43b4-a52d-bb6c6d695422\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172278 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gfj\" (UniqueName: \"kubernetes.io/projected/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-kube-api-access-m6gfj\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172314 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-reloader\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172338 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172358 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-metrics-certs\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172381 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-startup\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-sockets\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wp7k\" (UniqueName: \"kubernetes.io/projected/0fa5c4a3-9cf1-470f-a627-4d75201218c6-kube-api-access-6wp7k\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172454 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-cert\") pod 
\"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.172473 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/95fe3572-9eab-4945-bf35-bcf4cec1764d-metallb-excludel2\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.173192 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-conf\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.173391 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-metrics\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.173470 4770 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.173516 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f9a805c-9078-43b4-a52d-bb6c6d695422-cert podName:8f9a805c-9078-43b4-a52d-bb6c6d695422 nodeName:}" failed. No retries permitted until 2026-01-26 18:56:58.673499367 +0000 UTC m=+903.238406099 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f9a805c-9078-43b4-a52d-bb6c6d695422-cert") pod "frr-k8s-webhook-server-7df86c4f6c-n5vnz" (UID: "8f9a805c-9078-43b4-a52d-bb6c6d695422") : secret "frr-k8s-webhook-server-cert" not found Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.174063 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-sockets\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.174316 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-reloader\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.175065 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-frr-startup\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.179915 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-metrics-certs\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.191214 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gfj\" (UniqueName: \"kubernetes.io/projected/b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe-kube-api-access-m6gfj\") pod \"frr-k8s-nkgs9\" (UID: \"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe\") " 
pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.205711 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8nc\" (UniqueName: \"kubernetes.io/projected/8f9a805c-9078-43b4-a52d-bb6c6d695422-kube-api-access-4m8nc\") pod \"frr-k8s-webhook-server-7df86c4f6c-n5vnz\" (UID: \"8f9a805c-9078-43b4-a52d-bb6c6d695422\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.274163 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.274203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-metrics-certs\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.274227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wp7k\" (UniqueName: \"kubernetes.io/projected/0fa5c4a3-9cf1-470f-a627-4d75201218c6-kube-api-access-6wp7k\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.274246 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-cert\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc 
kubenswrapper[4770]: I0126 18:56:58.274262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/95fe3572-9eab-4945-bf35-bcf4cec1764d-metallb-excludel2\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.274291 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-metrics-certs\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.274309 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79cdr\" (UniqueName: \"kubernetes.io/projected/95fe3572-9eab-4945-bf35-bcf4cec1764d-kube-api-access-79cdr\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.274633 4770 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.274672 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist podName:95fe3572-9eab-4945-bf35-bcf4cec1764d nodeName:}" failed. No retries permitted until 2026-01-26 18:56:58.774660044 +0000 UTC m=+903.339566776 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist") pod "speaker-lxhr9" (UID: "95fe3572-9eab-4945-bf35-bcf4cec1764d") : secret "metallb-memberlist" not found Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.274821 4770 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.274842 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-metrics-certs podName:0fa5c4a3-9cf1-470f-a627-4d75201218c6 nodeName:}" failed. No retries permitted until 2026-01-26 18:56:58.774836069 +0000 UTC m=+903.339742791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-metrics-certs") pod "controller-6968d8fdc4-lgxhp" (UID: "0fa5c4a3-9cf1-470f-a627-4d75201218c6") : secret "controller-certs-secret" not found Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.275472 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/95fe3572-9eab-4945-bf35-bcf4cec1764d-metallb-excludel2\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.275530 4770 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.275553 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-metrics-certs podName:95fe3572-9eab-4945-bf35-bcf4cec1764d nodeName:}" failed. 
No retries permitted until 2026-01-26 18:56:58.775544669 +0000 UTC m=+903.340451401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-metrics-certs") pod "speaker-lxhr9" (UID: "95fe3572-9eab-4945-bf35-bcf4cec1764d") : secret "speaker-certs-secret" not found Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.279762 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.291535 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-cert\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.298104 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79cdr\" (UniqueName: \"kubernetes.io/projected/95fe3572-9eab-4945-bf35-bcf4cec1764d-kube-api-access-79cdr\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.301062 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wp7k\" (UniqueName: \"kubernetes.io/projected/0fa5c4a3-9cf1-470f-a627-4d75201218c6-kube-api-access-6wp7k\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.345024 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.682325 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f9a805c-9078-43b4-a52d-bb6c6d695422-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n5vnz\" (UID: \"8f9a805c-9078-43b4-a52d-bb6c6d695422\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.688654 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f9a805c-9078-43b4-a52d-bb6c6d695422-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-n5vnz\" (UID: \"8f9a805c-9078-43b4-a52d-bb6c6d695422\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.783611 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerStarted","Data":"f5d52bd117e7e1c4286644c2d95de2d6257f1abf7bc6678233eeae72239027ef"} Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.783868 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-metrics-certs\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.784042 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.784090 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-metrics-certs\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.784220 4770 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 18:56:58 crc kubenswrapper[4770]: E0126 18:56:58.784357 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist podName:95fe3572-9eab-4945-bf35-bcf4cec1764d nodeName:}" failed. No retries permitted until 2026-01-26 18:56:59.78433795 +0000 UTC m=+904.349244672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist") pod "speaker-lxhr9" (UID: "95fe3572-9eab-4945-bf35-bcf4cec1764d") : secret "metallb-memberlist" not found Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.788748 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-metrics-certs\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.789080 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0fa5c4a3-9cf1-470f-a627-4d75201218c6-metrics-certs\") pod \"controller-6968d8fdc4-lgxhp\" (UID: \"0fa5c4a3-9cf1-470f-a627-4d75201218c6\") " pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:58 crc kubenswrapper[4770]: I0126 18:56:58.963289 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.008858 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.180438 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz"] Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.293188 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lgxhp"] Jan 26 18:56:59 crc kubenswrapper[4770]: W0126 18:56:59.296252 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fa5c4a3_9cf1_470f_a627_4d75201218c6.slice/crio-6583d401ef3362d02ed82db6ec72c252b182a0d970e997afd37533d784725e19 WatchSource:0}: Error finding container 6583d401ef3362d02ed82db6ec72c252b182a0d970e997afd37533d784725e19: Status 404 returned error can't find the container with id 6583d401ef3362d02ed82db6ec72c252b182a0d970e997afd37533d784725e19 Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.791089 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" event={"ID":"8f9a805c-9078-43b4-a52d-bb6c6d695422","Type":"ContainerStarted","Data":"fdf031184e02f1c61cb2e54a55333e01b0227de900969cb4dc054ae3a70b0c12"} Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.792822 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lgxhp" event={"ID":"0fa5c4a3-9cf1-470f-a627-4d75201218c6","Type":"ContainerStarted","Data":"f2601b9fe40d543dcb42da2e6462dea4c763f4176a556d8d00597e9e5787f1d7"} Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.792860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lgxhp" 
event={"ID":"0fa5c4a3-9cf1-470f-a627-4d75201218c6","Type":"ContainerStarted","Data":"3bc99b2423c699987bb28550920d409747d941366642703fd38af8f2f3f24a64"} Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.792871 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lgxhp" event={"ID":"0fa5c4a3-9cf1-470f-a627-4d75201218c6","Type":"ContainerStarted","Data":"6583d401ef3362d02ed82db6ec72c252b182a0d970e997afd37533d784725e19"} Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.792998 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.820090 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lgxhp" podStartSLOduration=1.820066759 podStartE2EDuration="1.820066759s" podCreationTimestamp="2026-01-26 18:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:56:59.81391446 +0000 UTC m=+904.378821202" watchObservedRunningTime="2026-01-26 18:56:59.820066759 +0000 UTC m=+904.384973501" Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.832306 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.841568 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/95fe3572-9eab-4945-bf35-bcf4cec1764d-memberlist\") pod \"speaker-lxhr9\" (UID: \"95fe3572-9eab-4945-bf35-bcf4cec1764d\") " pod="metallb-system/speaker-lxhr9" Jan 26 18:56:59 crc kubenswrapper[4770]: I0126 18:56:59.887764 4770 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lxhr9" Jan 26 18:57:00 crc kubenswrapper[4770]: I0126 18:57:00.330394 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:57:00 crc kubenswrapper[4770]: I0126 18:57:00.330734 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:57:00 crc kubenswrapper[4770]: I0126 18:57:00.802322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lxhr9" event={"ID":"95fe3572-9eab-4945-bf35-bcf4cec1764d","Type":"ContainerStarted","Data":"cb8f691996e662b60277b7f8ca1c421eb0511c94a8527dad5d540f61d6f397d1"} Jan 26 18:57:00 crc kubenswrapper[4770]: I0126 18:57:00.802367 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lxhr9" event={"ID":"95fe3572-9eab-4945-bf35-bcf4cec1764d","Type":"ContainerStarted","Data":"3115d4281fcc647905209f8e355de71a7f56ad89e4d629c2d14849f08b0d950d"} Jan 26 18:57:00 crc kubenswrapper[4770]: I0126 18:57:00.802378 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lxhr9" event={"ID":"95fe3572-9eab-4945-bf35-bcf4cec1764d","Type":"ContainerStarted","Data":"a0975fc1e7bda4f693d073e5cd455d171a3424554231503ca64b0a338e965e87"} Jan 26 18:57:00 crc kubenswrapper[4770]: I0126 18:57:00.802630 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lxhr9" Jan 26 18:57:00 crc kubenswrapper[4770]: I0126 18:57:00.820117 
4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lxhr9" podStartSLOduration=2.820098812 podStartE2EDuration="2.820098812s" podCreationTimestamp="2026-01-26 18:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:57:00.815846055 +0000 UTC m=+905.380752787" watchObservedRunningTime="2026-01-26 18:57:00.820098812 +0000 UTC m=+905.385005544" Jan 26 18:57:05 crc kubenswrapper[4770]: I0126 18:57:05.838755 4770 generic.go:334] "Generic (PLEG): container finished" podID="b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe" containerID="124f6afd09ae12f0758af1fb599e09b039a328835192035e391b713c7ea03803" exitCode=0 Jan 26 18:57:05 crc kubenswrapper[4770]: I0126 18:57:05.838853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerDied","Data":"124f6afd09ae12f0758af1fb599e09b039a328835192035e391b713c7ea03803"} Jan 26 18:57:05 crc kubenswrapper[4770]: I0126 18:57:05.848441 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" event={"ID":"8f9a805c-9078-43b4-a52d-bb6c6d695422","Type":"ContainerStarted","Data":"577b94d6d8fd3068ac331ac18fae74f0511716a110cd8cce13df2e5b0a5ba70b"} Jan 26 18:57:05 crc kubenswrapper[4770]: I0126 18:57:05.848585 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:57:05 crc kubenswrapper[4770]: I0126 18:57:05.898103 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" podStartSLOduration=2.663880694 podStartE2EDuration="8.898081613s" podCreationTimestamp="2026-01-26 18:56:57 +0000 UTC" firstStartedPulling="2026-01-26 18:56:59.233932051 +0000 UTC m=+903.798838773" 
lastFinishedPulling="2026-01-26 18:57:05.46813296 +0000 UTC m=+910.033039692" observedRunningTime="2026-01-26 18:57:05.891931325 +0000 UTC m=+910.456838067" watchObservedRunningTime="2026-01-26 18:57:05.898081613 +0000 UTC m=+910.462988345" Jan 26 18:57:06 crc kubenswrapper[4770]: I0126 18:57:06.857109 4770 generic.go:334] "Generic (PLEG): container finished" podID="b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe" containerID="c8c3b0edaaf148c8251d3c3b26363ff3eb60bf247403e6f7a8859e594ce1ef37" exitCode=0 Jan 26 18:57:06 crc kubenswrapper[4770]: I0126 18:57:06.857172 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerDied","Data":"c8c3b0edaaf148c8251d3c3b26363ff3eb60bf247403e6f7a8859e594ce1ef37"} Jan 26 18:57:07 crc kubenswrapper[4770]: I0126 18:57:07.866360 4770 generic.go:334] "Generic (PLEG): container finished" podID="b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe" containerID="135afb8b4f273bf76f819e2cfadb3a3d869a73e0dd0fd586cd14d304c69368be" exitCode=0 Jan 26 18:57:07 crc kubenswrapper[4770]: I0126 18:57:07.866409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerDied","Data":"135afb8b4f273bf76f819e2cfadb3a3d869a73e0dd0fd586cd14d304c69368be"} Jan 26 18:57:08 crc kubenswrapper[4770]: I0126 18:57:08.882232 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerStarted","Data":"6052e21378c9b44a2ad0bfcd65ee155728fb376db2cc045eb64e7abacef19b3d"} Jan 26 18:57:08 crc kubenswrapper[4770]: I0126 18:57:08.883168 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerStarted","Data":"ca7e9227df341c36b7dd264dcebb9dd02a738555376bca6ef434b4f2ea0e1557"} Jan 26 18:57:08 crc 
kubenswrapper[4770]: I0126 18:57:08.883508 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerStarted","Data":"4f4fc630add4d3fc81ca9c5cf2907bd11f9782e2addfe521126655993801edb3"} Jan 26 18:57:08 crc kubenswrapper[4770]: I0126 18:57:08.883684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerStarted","Data":"8f61b532100362aa6273bb5b8f5895824f8664e4d59edc812985f24d070b81c9"} Jan 26 18:57:08 crc kubenswrapper[4770]: I0126 18:57:08.883779 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerStarted","Data":"e62f6759f7892819c8e75970bd082d832c90a4a4085b288bf392e6ad64104e7c"} Jan 26 18:57:09 crc kubenswrapper[4770]: I0126 18:57:09.018063 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lgxhp" Jan 26 18:57:09 crc kubenswrapper[4770]: I0126 18:57:09.894592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkgs9" event={"ID":"b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe","Type":"ContainerStarted","Data":"e53800fb18550ba20f5fdb09c218af2b83636bc8c49c8d0c203c3b0276e97297"} Jan 26 18:57:09 crc kubenswrapper[4770]: I0126 18:57:09.895081 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:57:09 crc kubenswrapper[4770]: I0126 18:57:09.916427 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nkgs9" podStartSLOduration=5.9215626409999995 podStartE2EDuration="12.916399721s" podCreationTimestamp="2026-01-26 18:56:57 +0000 UTC" firstStartedPulling="2026-01-26 18:56:58.457181488 +0000 UTC m=+903.022088230" lastFinishedPulling="2026-01-26 18:57:05.452018578 +0000 UTC 
m=+910.016925310" observedRunningTime="2026-01-26 18:57:09.91343389 +0000 UTC m=+914.478340622" watchObservedRunningTime="2026-01-26 18:57:09.916399721 +0000 UTC m=+914.481306453" Jan 26 18:57:13 crc kubenswrapper[4770]: I0126 18:57:13.345756 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:57:13 crc kubenswrapper[4770]: I0126 18:57:13.409292 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:57:18 crc kubenswrapper[4770]: I0126 18:57:18.347866 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nkgs9" Jan 26 18:57:18 crc kubenswrapper[4770]: I0126 18:57:18.970073 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-n5vnz" Jan 26 18:57:19 crc kubenswrapper[4770]: I0126 18:57:19.892077 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lxhr9" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.293573 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-s7zlp"] Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.294889 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s7zlp" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.297380 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-sjgjj" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.297456 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.298270 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.314332 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s7zlp"] Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.477307 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pg85\" (UniqueName: \"kubernetes.io/projected/37d872c0-99f7-41e5-adc7-409f2c4539e2-kube-api-access-6pg85\") pod \"openstack-operator-index-s7zlp\" (UID: \"37d872c0-99f7-41e5-adc7-409f2c4539e2\") " pod="openstack-operators/openstack-operator-index-s7zlp" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.578801 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pg85\" (UniqueName: \"kubernetes.io/projected/37d872c0-99f7-41e5-adc7-409f2c4539e2-kube-api-access-6pg85\") pod \"openstack-operator-index-s7zlp\" (UID: \"37d872c0-99f7-41e5-adc7-409f2c4539e2\") " pod="openstack-operators/openstack-operator-index-s7zlp" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.615648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pg85\" (UniqueName: \"kubernetes.io/projected/37d872c0-99f7-41e5-adc7-409f2c4539e2-kube-api-access-6pg85\") pod \"openstack-operator-index-s7zlp\" (UID: 
\"37d872c0-99f7-41e5-adc7-409f2c4539e2\") " pod="openstack-operators/openstack-operator-index-s7zlp" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.621098 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s7zlp" Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.849111 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s7zlp"] Jan 26 18:57:23 crc kubenswrapper[4770]: W0126 18:57:23.862007 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d872c0_99f7_41e5_adc7_409f2c4539e2.slice/crio-34d741acba11f33b7dff54a3d7ac066299bba35a4078c87a09d7e693d4ce173b WatchSource:0}: Error finding container 34d741acba11f33b7dff54a3d7ac066299bba35a4078c87a09d7e693d4ce173b: Status 404 returned error can't find the container with id 34d741acba11f33b7dff54a3d7ac066299bba35a4078c87a09d7e693d4ce173b Jan 26 18:57:23 crc kubenswrapper[4770]: I0126 18:57:23.997915 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7zlp" event={"ID":"37d872c0-99f7-41e5-adc7-409f2c4539e2","Type":"ContainerStarted","Data":"34d741acba11f33b7dff54a3d7ac066299bba35a4078c87a09d7e693d4ce173b"} Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.016182 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7zlp" event={"ID":"37d872c0-99f7-41e5-adc7-409f2c4539e2","Type":"ContainerStarted","Data":"0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1"} Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.041561 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-s7zlp" podStartSLOduration=1.184464549 podStartE2EDuration="3.041536121s" podCreationTimestamp="2026-01-26 18:57:23 +0000 UTC" 
firstStartedPulling="2026-01-26 18:57:23.863564927 +0000 UTC m=+928.428471669" lastFinishedPulling="2026-01-26 18:57:25.720636499 +0000 UTC m=+930.285543241" observedRunningTime="2026-01-26 18:57:26.035393172 +0000 UTC m=+930.600299974" watchObservedRunningTime="2026-01-26 18:57:26.041536121 +0000 UTC m=+930.606442883" Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.071195 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-s7zlp"] Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.672639 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cttfq"] Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.673825 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.692622 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cttfq"] Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.732670 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2wz\" (UniqueName: \"kubernetes.io/projected/9093abfb-eda1-4bea-a7c8-1610996eec7c-kube-api-access-7l2wz\") pod \"openstack-operator-index-cttfq\" (UID: \"9093abfb-eda1-4bea-a7c8-1610996eec7c\") " pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.833920 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l2wz\" (UniqueName: \"kubernetes.io/projected/9093abfb-eda1-4bea-a7c8-1610996eec7c-kube-api-access-7l2wz\") pod \"openstack-operator-index-cttfq\" (UID: \"9093abfb-eda1-4bea-a7c8-1610996eec7c\") " pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.859582 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7l2wz\" (UniqueName: \"kubernetes.io/projected/9093abfb-eda1-4bea-a7c8-1610996eec7c-kube-api-access-7l2wz\") pod \"openstack-operator-index-cttfq\" (UID: \"9093abfb-eda1-4bea-a7c8-1610996eec7c\") " pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:26 crc kubenswrapper[4770]: I0126 18:57:26.991693 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:27 crc kubenswrapper[4770]: I0126 18:57:27.482124 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cttfq"] Jan 26 18:57:27 crc kubenswrapper[4770]: W0126 18:57:27.490934 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9093abfb_eda1_4bea_a7c8_1610996eec7c.slice/crio-54a5ca98a6f11963460a24122e0b20ce30c8cb5d06f9fcbcdfe4664b831c56f0 WatchSource:0}: Error finding container 54a5ca98a6f11963460a24122e0b20ce30c8cb5d06f9fcbcdfe4664b831c56f0: Status 404 returned error can't find the container with id 54a5ca98a6f11963460a24122e0b20ce30c8cb5d06f9fcbcdfe4664b831c56f0 Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.032395 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cttfq" event={"ID":"9093abfb-eda1-4bea-a7c8-1610996eec7c","Type":"ContainerStarted","Data":"989c35365d4892492b8871fc3d795796085144ab26c005fed9b9ff4963138812"} Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.032860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cttfq" event={"ID":"9093abfb-eda1-4bea-a7c8-1610996eec7c","Type":"ContainerStarted","Data":"54a5ca98a6f11963460a24122e0b20ce30c8cb5d06f9fcbcdfe4664b831c56f0"} Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.032544 4770 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack-operators/openstack-operator-index-s7zlp" podUID="37d872c0-99f7-41e5-adc7-409f2c4539e2" containerName="registry-server" containerID="cri-o://0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1" gracePeriod=2 Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.473988 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s7zlp" Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.493941 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cttfq" podStartSLOduration=2.436610823 podStartE2EDuration="2.493844949s" podCreationTimestamp="2026-01-26 18:57:26 +0000 UTC" firstStartedPulling="2026-01-26 18:57:27.495984736 +0000 UTC m=+932.060891478" lastFinishedPulling="2026-01-26 18:57:27.553218872 +0000 UTC m=+932.118125604" observedRunningTime="2026-01-26 18:57:28.056284907 +0000 UTC m=+932.621191669" watchObservedRunningTime="2026-01-26 18:57:28.493844949 +0000 UTC m=+933.058751711" Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.659319 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pg85\" (UniqueName: \"kubernetes.io/projected/37d872c0-99f7-41e5-adc7-409f2c4539e2-kube-api-access-6pg85\") pod \"37d872c0-99f7-41e5-adc7-409f2c4539e2\" (UID: \"37d872c0-99f7-41e5-adc7-409f2c4539e2\") " Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.666411 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d872c0-99f7-41e5-adc7-409f2c4539e2-kube-api-access-6pg85" (OuterVolumeSpecName: "kube-api-access-6pg85") pod "37d872c0-99f7-41e5-adc7-409f2c4539e2" (UID: "37d872c0-99f7-41e5-adc7-409f2c4539e2"). InnerVolumeSpecName "kube-api-access-6pg85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:28 crc kubenswrapper[4770]: I0126 18:57:28.760985 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pg85\" (UniqueName: \"kubernetes.io/projected/37d872c0-99f7-41e5-adc7-409f2c4539e2-kube-api-access-6pg85\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.043105 4770 generic.go:334] "Generic (PLEG): container finished" podID="37d872c0-99f7-41e5-adc7-409f2c4539e2" containerID="0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1" exitCode=0 Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.043238 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s7zlp" Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.043782 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7zlp" event={"ID":"37d872c0-99f7-41e5-adc7-409f2c4539e2","Type":"ContainerDied","Data":"0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1"} Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.044185 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7zlp" event={"ID":"37d872c0-99f7-41e5-adc7-409f2c4539e2","Type":"ContainerDied","Data":"34d741acba11f33b7dff54a3d7ac066299bba35a4078c87a09d7e693d4ce173b"} Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.044233 4770 scope.go:117] "RemoveContainer" containerID="0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1" Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.073218 4770 scope.go:117] "RemoveContainer" containerID="0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1" Jan 26 18:57:29 crc kubenswrapper[4770]: E0126 18:57:29.073963 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1\": container with ID starting with 0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1 not found: ID does not exist" containerID="0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1" Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.074034 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1"} err="failed to get container status \"0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1\": rpc error: code = NotFound desc = could not find container \"0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1\": container with ID starting with 0f487e7a04a0a62c3880db507b71b9422474eb515a1b9415662e73fa29c51fc1 not found: ID does not exist" Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.085282 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-s7zlp"] Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.089788 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-s7zlp"] Jan 26 18:57:29 crc kubenswrapper[4770]: I0126 18:57:29.777950 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d872c0-99f7-41e5-adc7-409f2c4539e2" path="/var/lib/kubelet/pods/37d872c0-99f7-41e5-adc7-409f2c4539e2/volumes" Jan 26 18:57:30 crc kubenswrapper[4770]: I0126 18:57:30.330793 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:57:30 crc kubenswrapper[4770]: I0126 18:57:30.331161 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:57:36 crc kubenswrapper[4770]: I0126 18:57:36.992941 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:36 crc kubenswrapper[4770]: I0126 18:57:36.993890 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:37 crc kubenswrapper[4770]: I0126 18:57:37.058113 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:37 crc kubenswrapper[4770]: I0126 18:57:37.134675 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cttfq" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.163779 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv"] Jan 26 18:57:38 crc kubenswrapper[4770]: E0126 18:57:38.164305 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d872c0-99f7-41e5-adc7-409f2c4539e2" containerName="registry-server" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.164321 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d872c0-99f7-41e5-adc7-409f2c4539e2" containerName="registry-server" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.164486 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d872c0-99f7-41e5-adc7-409f2c4539e2" containerName="registry-server" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.165539 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.168632 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tnntk" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.180438 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv"] Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.321094 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-util\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.321257 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-bundle\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.321360 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7jc\" (UniqueName: \"kubernetes.io/projected/647d65c7-b9da-4084-b0eb-8d0867785785-kube-api-access-6d7jc\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 
18:57:38.423167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7jc\" (UniqueName: \"kubernetes.io/projected/647d65c7-b9da-4084-b0eb-8d0867785785-kube-api-access-6d7jc\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.423314 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-util\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.423370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-bundle\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.423832 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-util\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.423902 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-bundle\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.451642 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7jc\" (UniqueName: \"kubernetes.io/projected/647d65c7-b9da-4084-b0eb-8d0867785785-kube-api-access-6d7jc\") pod \"77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.521677 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:38 crc kubenswrapper[4770]: I0126 18:57:38.963909 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv"] Jan 26 18:57:39 crc kubenswrapper[4770]: I0126 18:57:39.117048 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" event={"ID":"647d65c7-b9da-4084-b0eb-8d0867785785","Type":"ContainerStarted","Data":"9d14d2f81a82781c158bd83486c9e0a876d45080c1f191106e8513bae7633c50"} Jan 26 18:57:40 crc kubenswrapper[4770]: I0126 18:57:40.125930 4770 generic.go:334] "Generic (PLEG): container finished" podID="647d65c7-b9da-4084-b0eb-8d0867785785" containerID="6c0d377d00602bb7136deed47e64b81829693be76a7d5755f431d64d387d9dc8" exitCode=0 Jan 26 18:57:40 crc kubenswrapper[4770]: I0126 18:57:40.125980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" event={"ID":"647d65c7-b9da-4084-b0eb-8d0867785785","Type":"ContainerDied","Data":"6c0d377d00602bb7136deed47e64b81829693be76a7d5755f431d64d387d9dc8"} Jan 26 18:57:41 crc kubenswrapper[4770]: I0126 18:57:41.136633 4770 generic.go:334] "Generic (PLEG): container finished" podID="647d65c7-b9da-4084-b0eb-8d0867785785" containerID="0d58fc173c8ef56a2867c75010cbf625d9b5637af6c026f3758be036957c81b8" exitCode=0 Jan 26 18:57:41 crc kubenswrapper[4770]: I0126 18:57:41.136845 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" event={"ID":"647d65c7-b9da-4084-b0eb-8d0867785785","Type":"ContainerDied","Data":"0d58fc173c8ef56a2867c75010cbf625d9b5637af6c026f3758be036957c81b8"} Jan 26 18:57:42 crc kubenswrapper[4770]: I0126 18:57:42.146831 4770 generic.go:334] "Generic (PLEG): container finished" podID="647d65c7-b9da-4084-b0eb-8d0867785785" containerID="6c1bfd48f616372a45ecb398474e4ed87690679939944f5ed19d237ee45c8532" exitCode=0 Jan 26 18:57:42 crc kubenswrapper[4770]: I0126 18:57:42.146906 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" event={"ID":"647d65c7-b9da-4084-b0eb-8d0867785785","Type":"ContainerDied","Data":"6c1bfd48f616372a45ecb398474e4ed87690679939944f5ed19d237ee45c8532"} Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.521474 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.703636 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d7jc\" (UniqueName: \"kubernetes.io/projected/647d65c7-b9da-4084-b0eb-8d0867785785-kube-api-access-6d7jc\") pod \"647d65c7-b9da-4084-b0eb-8d0867785785\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.703720 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-util\") pod \"647d65c7-b9da-4084-b0eb-8d0867785785\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.703806 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-bundle\") pod \"647d65c7-b9da-4084-b0eb-8d0867785785\" (UID: \"647d65c7-b9da-4084-b0eb-8d0867785785\") " Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.704633 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-bundle" (OuterVolumeSpecName: "bundle") pod "647d65c7-b9da-4084-b0eb-8d0867785785" (UID: "647d65c7-b9da-4084-b0eb-8d0867785785"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.712603 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647d65c7-b9da-4084-b0eb-8d0867785785-kube-api-access-6d7jc" (OuterVolumeSpecName: "kube-api-access-6d7jc") pod "647d65c7-b9da-4084-b0eb-8d0867785785" (UID: "647d65c7-b9da-4084-b0eb-8d0867785785"). InnerVolumeSpecName "kube-api-access-6d7jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.724385 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-util" (OuterVolumeSpecName: "util") pod "647d65c7-b9da-4084-b0eb-8d0867785785" (UID: "647d65c7-b9da-4084-b0eb-8d0867785785"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.805444 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d7jc\" (UniqueName: \"kubernetes.io/projected/647d65c7-b9da-4084-b0eb-8d0867785785-kube-api-access-6d7jc\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.805490 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-util\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:43 crc kubenswrapper[4770]: I0126 18:57:43.805505 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/647d65c7-b9da-4084-b0eb-8d0867785785-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 18:57:44 crc kubenswrapper[4770]: I0126 18:57:44.165773 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" event={"ID":"647d65c7-b9da-4084-b0eb-8d0867785785","Type":"ContainerDied","Data":"9d14d2f81a82781c158bd83486c9e0a876d45080c1f191106e8513bae7633c50"} Jan 26 18:57:44 crc kubenswrapper[4770]: I0126 18:57:44.165856 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d14d2f81a82781c158bd83486c9e0a876d45080c1f191106e8513bae7633c50" Jan 26 18:57:44 crc kubenswrapper[4770]: I0126 18:57:44.165904 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.836033 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr"] Jan 26 18:57:50 crc kubenswrapper[4770]: E0126 18:57:50.836647 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647d65c7-b9da-4084-b0eb-8d0867785785" containerName="extract" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.836665 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="647d65c7-b9da-4084-b0eb-8d0867785785" containerName="extract" Jan 26 18:57:50 crc kubenswrapper[4770]: E0126 18:57:50.836680 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647d65c7-b9da-4084-b0eb-8d0867785785" containerName="util" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.836687 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="647d65c7-b9da-4084-b0eb-8d0867785785" containerName="util" Jan 26 18:57:50 crc kubenswrapper[4770]: E0126 18:57:50.836730 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647d65c7-b9da-4084-b0eb-8d0867785785" containerName="pull" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.836739 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="647d65c7-b9da-4084-b0eb-8d0867785785" containerName="pull" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.836874 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="647d65c7-b9da-4084-b0eb-8d0867785785" containerName="extract" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.837358 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.840721 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-dfrh4" Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.884183 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr"] Jan 26 18:57:50 crc kubenswrapper[4770]: I0126 18:57:50.929237 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27tz\" (UniqueName: \"kubernetes.io/projected/b2b075a6-2519-42f2-876d-c0249db54ca4-kube-api-access-f27tz\") pod \"openstack-operator-controller-init-5bf847bbdc-9phhr\" (UID: \"b2b075a6-2519-42f2-876d-c0249db54ca4\") " pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" Jan 26 18:57:51 crc kubenswrapper[4770]: I0126 18:57:51.031111 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f27tz\" (UniqueName: \"kubernetes.io/projected/b2b075a6-2519-42f2-876d-c0249db54ca4-kube-api-access-f27tz\") pod \"openstack-operator-controller-init-5bf847bbdc-9phhr\" (UID: \"b2b075a6-2519-42f2-876d-c0249db54ca4\") " pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" Jan 26 18:57:51 crc kubenswrapper[4770]: I0126 18:57:51.065766 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27tz\" (UniqueName: \"kubernetes.io/projected/b2b075a6-2519-42f2-876d-c0249db54ca4-kube-api-access-f27tz\") pod \"openstack-operator-controller-init-5bf847bbdc-9phhr\" (UID: \"b2b075a6-2519-42f2-876d-c0249db54ca4\") " pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" Jan 26 18:57:51 crc kubenswrapper[4770]: I0126 18:57:51.166601 4770 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" Jan 26 18:57:51 crc kubenswrapper[4770]: I0126 18:57:51.468323 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr"] Jan 26 18:57:52 crc kubenswrapper[4770]: I0126 18:57:52.226478 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" event={"ID":"b2b075a6-2519-42f2-876d-c0249db54ca4","Type":"ContainerStarted","Data":"ccd7941ac08d1fa86437b04e5af26f310671b099350b5af064a38c2def9f772f"} Jan 26 18:57:57 crc kubenswrapper[4770]: I0126 18:57:57.271012 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" event={"ID":"b2b075a6-2519-42f2-876d-c0249db54ca4","Type":"ContainerStarted","Data":"47ca78f2f3a76cba2559a02a4a19bfaa87cd2f51963260a81c7d80f9398b8024"} Jan 26 18:57:57 crc kubenswrapper[4770]: I0126 18:57:57.271595 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" Jan 26 18:57:57 crc kubenswrapper[4770]: I0126 18:57:57.305509 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" podStartSLOduration=2.248583786 podStartE2EDuration="7.305485531s" podCreationTimestamp="2026-01-26 18:57:50 +0000 UTC" firstStartedPulling="2026-01-26 18:57:51.476088619 +0000 UTC m=+956.040995351" lastFinishedPulling="2026-01-26 18:57:56.532990354 +0000 UTC m=+961.097897096" observedRunningTime="2026-01-26 18:57:57.300457873 +0000 UTC m=+961.865364605" watchObservedRunningTime="2026-01-26 18:57:57.305485531 +0000 UTC m=+961.870392283" Jan 26 18:58:00 crc kubenswrapper[4770]: I0126 18:58:00.331338 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 18:58:00 crc kubenswrapper[4770]: I0126 18:58:00.331825 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 18:58:00 crc kubenswrapper[4770]: I0126 18:58:00.331896 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 18:58:00 crc kubenswrapper[4770]: I0126 18:58:00.332743 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"759ad108705104ebfd180c02710e3cc9f867c8dcc0c0763f8371a75d18ecbaef"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 18:58:00 crc kubenswrapper[4770]: I0126 18:58:00.332805 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://759ad108705104ebfd180c02710e3cc9f867c8dcc0c0763f8371a75d18ecbaef" gracePeriod=600 Jan 26 18:58:01 crc kubenswrapper[4770]: I0126 18:58:01.170330 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5bf847bbdc-9phhr" Jan 26 18:58:01 crc kubenswrapper[4770]: I0126 18:58:01.298352 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="759ad108705104ebfd180c02710e3cc9f867c8dcc0c0763f8371a75d18ecbaef" exitCode=0 Jan 26 18:58:01 crc kubenswrapper[4770]: I0126 18:58:01.298393 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"759ad108705104ebfd180c02710e3cc9f867c8dcc0c0763f8371a75d18ecbaef"} Jan 26 18:58:01 crc kubenswrapper[4770]: I0126 18:58:01.298417 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"c87daf1a126cd93e465998417d60959f10223fe0df7679f35c5368eec51dbce0"} Jan 26 18:58:01 crc kubenswrapper[4770]: I0126 18:58:01.298432 4770 scope.go:117] "RemoveContainer" containerID="a472ada11cc8156b8c652f50413b2cfc3ca2807a990cd33cf00079d10d205fee" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.508089 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.509403 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.511572 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-klrdq" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.513927 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.515108 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.519727 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wfxsm" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.532564 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.541644 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.542426 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.546508 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tpxhp" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.549177 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.572506 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.573552 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.577339 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xvnnk" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.581460 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.595688 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.616775 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.617539 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.620421 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qxqxg" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.634872 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.635845 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.640355 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bmqjh" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.640587 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.660403 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.685261 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhhh2\" (UniqueName: \"kubernetes.io/projected/7dfabc71-10aa-4337-a700-6dda2a4819d5-kube-api-access-lhhh2\") pod \"designate-operator-controller-manager-b45d7bf98-gwg5f\" (UID: \"7dfabc71-10aa-4337-a700-6dda2a4819d5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.685351 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffj6g\" (UniqueName: \"kubernetes.io/projected/dc15189d-c78f-475d-9a49-dac90d4d4fcb-kube-api-access-ffj6g\") pod \"cinder-operator-controller-manager-7478f7dbf9-g9nzc\" (UID: \"dc15189d-c78f-475d-9a49-dac90d4d4fcb\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.685393 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt4pl\" (UniqueName: \"kubernetes.io/projected/99b8587f-51d1-4cb2-a0ab-e131c9135388-kube-api-access-pt4pl\") pod 
\"glance-operator-controller-manager-78fdd796fd-h2zrp\" (UID: \"99b8587f-51d1-4cb2-a0ab-e131c9135388\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.685430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5skf\" (UniqueName: \"kubernetes.io/projected/1666ea4c-3865-4bc2-8741-29383616e875-kube-api-access-q5skf\") pod \"barbican-operator-controller-manager-7f86f8796f-x8m5l\" (UID: \"1666ea4c-3865-4bc2-8741-29383616e875\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.687747 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.688603 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.691545 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p7258" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.691966 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.699864 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.700678 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.701808 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bmwds" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.707635 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.714406 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.723876 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.724943 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.727467 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dr6zf" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.727686 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.728638 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.754674 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-562cg" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786521 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffj6g\" (UniqueName: \"kubernetes.io/projected/dc15189d-c78f-475d-9a49-dac90d4d4fcb-kube-api-access-ffj6g\") pod \"cinder-operator-controller-manager-7478f7dbf9-g9nzc\" (UID: \"dc15189d-c78f-475d-9a49-dac90d4d4fcb\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786553 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwtj\" (UniqueName: \"kubernetes.io/projected/462ae2ba-a49e-4eb3-9d7e-0a853412206f-kube-api-access-rgwtj\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786586 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt4pl\" (UniqueName: \"kubernetes.io/projected/99b8587f-51d1-4cb2-a0ab-e131c9135388-kube-api-access-pt4pl\") pod \"glance-operator-controller-manager-78fdd796fd-h2zrp\" (UID: \"99b8587f-51d1-4cb2-a0ab-e131c9135388\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786612 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5skf\" (UniqueName: \"kubernetes.io/projected/1666ea4c-3865-4bc2-8741-29383616e875-kube-api-access-q5skf\") pod 
\"barbican-operator-controller-manager-7f86f8796f-x8m5l\" (UID: \"1666ea4c-3865-4bc2-8741-29383616e875\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786640 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vfh9\" (UniqueName: \"kubernetes.io/projected/cc595d5d-2f69-47a8-a63f-7b4abce23fdd-kube-api-access-8vfh9\") pod \"horizon-operator-controller-manager-77d5c5b54f-zn9m9\" (UID: \"cc595d5d-2f69-47a8-a63f-7b4abce23fdd\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786660 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwht\" (UniqueName: \"kubernetes.io/projected/0e7b29c5-2473-488f-a8cf-57863472bd68-kube-api-access-8hwht\") pod \"ironic-operator-controller-manager-598f7747c9-jg69w\" (UID: \"0e7b29c5-2473-488f-a8cf-57863472bd68\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786688 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhxp\" (UniqueName: \"kubernetes.io/projected/c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f-kube-api-access-7mhxp\") pod \"heat-operator-controller-manager-594c8c9d5d-g4brh\" (UID: \"c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786726 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhhh2\" (UniqueName: \"kubernetes.io/projected/7dfabc71-10aa-4337-a700-6dda2a4819d5-kube-api-access-lhhh2\") pod \"designate-operator-controller-manager-b45d7bf98-gwg5f\" (UID: 
\"7dfabc71-10aa-4337-a700-6dda2a4819d5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.786748 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.799217 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.800010 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.804280 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.811066 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.816556 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.817461 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.820528 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.831680 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cwz9x" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.831854 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g4v2l" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.835273 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt4pl\" (UniqueName: \"kubernetes.io/projected/99b8587f-51d1-4cb2-a0ab-e131c9135388-kube-api-access-pt4pl\") pod \"glance-operator-controller-manager-78fdd796fd-h2zrp\" (UID: \"99b8587f-51d1-4cb2-a0ab-e131c9135388\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.838244 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhhh2\" (UniqueName: \"kubernetes.io/projected/7dfabc71-10aa-4337-a700-6dda2a4819d5-kube-api-access-lhhh2\") pod \"designate-operator-controller-manager-b45d7bf98-gwg5f\" (UID: \"7dfabc71-10aa-4337-a700-6dda2a4819d5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.838648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5skf\" (UniqueName: \"kubernetes.io/projected/1666ea4c-3865-4bc2-8741-29383616e875-kube-api-access-q5skf\") pod \"barbican-operator-controller-manager-7f86f8796f-x8m5l\" (UID: \"1666ea4c-3865-4bc2-8741-29383616e875\") " 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.840381 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.840889 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffj6g\" (UniqueName: \"kubernetes.io/projected/dc15189d-c78f-475d-9a49-dac90d4d4fcb-kube-api-access-ffj6g\") pod \"cinder-operator-controller-manager-7478f7dbf9-g9nzc\" (UID: \"dc15189d-c78f-475d-9a49-dac90d4d4fcb\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.841151 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.853778 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.854735 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.858685 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-z26wk" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.859718 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.862717 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.865974 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dkzvq" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.866422 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.871179 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.877474 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889215 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdg7g\" (UniqueName: \"kubernetes.io/projected/d427e158-3f69-44b8-abe3-1510fb4fdd1e-kube-api-access-rdg7g\") pod \"neutron-operator-controller-manager-78d58447c5-4bpjq\" (UID: \"d427e158-3f69-44b8-abe3-1510fb4fdd1e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889260 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vfh9\" (UniqueName: \"kubernetes.io/projected/cc595d5d-2f69-47a8-a63f-7b4abce23fdd-kube-api-access-8vfh9\") pod \"horizon-operator-controller-manager-77d5c5b54f-zn9m9\" (UID: \"cc595d5d-2f69-47a8-a63f-7b4abce23fdd\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889284 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8hwht\" (UniqueName: \"kubernetes.io/projected/0e7b29c5-2473-488f-a8cf-57863472bd68-kube-api-access-8hwht\") pod \"ironic-operator-controller-manager-598f7747c9-jg69w\" (UID: \"0e7b29c5-2473-488f-a8cf-57863472bd68\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889319 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s98z\" (UniqueName: \"kubernetes.io/projected/7ac27e32-922a-4a46-9bb3-a3daa301dee7-kube-api-access-5s98z\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n\" (UID: \"7ac27e32-922a-4a46-9bb3-a3daa301dee7\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889340 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mhxp\" (UniqueName: \"kubernetes.io/projected/c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f-kube-api-access-7mhxp\") pod \"heat-operator-controller-manager-594c8c9d5d-g4brh\" (UID: \"c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889359 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbphz\" (UniqueName: \"kubernetes.io/projected/68c5aef7-2f00-4a28-8a25-6af0a5cd4013-kube-api-access-jbphz\") pod \"keystone-operator-controller-manager-b8b6d4659-v9wk4\" (UID: \"68c5aef7-2f00-4a28-8a25-6af0a5cd4013\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889383 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889442 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwtj\" (UniqueName: \"kubernetes.io/projected/462ae2ba-a49e-4eb3-9d7e-0a853412206f-kube-api-access-rgwtj\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.889466 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqhhf\" (UniqueName: \"kubernetes.io/projected/444d3be6-b12b-4473-abff-a5e5f35af270-kube-api-access-bqhhf\") pod \"manila-operator-controller-manager-78c6999f6f-58zsz\" (UID: \"444d3be6-b12b-4473-abff-a5e5f35af270\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" Jan 26 18:58:21 crc kubenswrapper[4770]: E0126 18:58:21.890126 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:21 crc kubenswrapper[4770]: E0126 18:58:21.890279 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert podName:462ae2ba-a49e-4eb3-9d7e-0a853412206f nodeName:}" failed. No retries permitted until 2026-01-26 18:58:22.390258965 +0000 UTC m=+986.955165697 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert") pod "infra-operator-controller-manager-694cf4f878-2tv9j" (UID: "462ae2ba-a49e-4eb3-9d7e-0a853412206f") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.899915 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.914812 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwht\" (UniqueName: \"kubernetes.io/projected/0e7b29c5-2473-488f-a8cf-57863472bd68-kube-api-access-8hwht\") pod \"ironic-operator-controller-manager-598f7747c9-jg69w\" (UID: \"0e7b29c5-2473-488f-a8cf-57863472bd68\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.917658 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.919739 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vfh9\" (UniqueName: \"kubernetes.io/projected/cc595d5d-2f69-47a8-a63f-7b4abce23fdd-kube-api-access-8vfh9\") pod \"horizon-operator-controller-manager-77d5c5b54f-zn9m9\" (UID: \"cc595d5d-2f69-47a8-a63f-7b4abce23fdd\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.921234 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.922384 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.923880 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-bszwx" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.925285 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwtj\" (UniqueName: \"kubernetes.io/projected/462ae2ba-a49e-4eb3-9d7e-0a853412206f-kube-api-access-rgwtj\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.928782 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.929692 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.930365 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mhxp\" (UniqueName: \"kubernetes.io/projected/c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f-kube-api-access-7mhxp\") pod \"heat-operator-controller-manager-594c8c9d5d-g4brh\" (UID: \"c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.933253 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tq982" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.939769 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.940642 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.948098 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.958029 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.963817 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.976871 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm"] Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.977964 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990144 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4b96b" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990197 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b47r\" (UniqueName: \"kubernetes.io/projected/ffc82616-ae6f-4f03-9c55-c235cd7cb5ff-kube-api-access-9b47r\") pod \"octavia-operator-controller-manager-5f4cd88d46-8wtk6\" (UID: \"ffc82616-ae6f-4f03-9c55-c235cd7cb5ff\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990243 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqhhf\" (UniqueName: \"kubernetes.io/projected/444d3be6-b12b-4473-abff-a5e5f35af270-kube-api-access-bqhhf\") pod \"manila-operator-controller-manager-78c6999f6f-58zsz\" (UID: \"444d3be6-b12b-4473-abff-a5e5f35af270\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990283 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rdg7g\" (UniqueName: \"kubernetes.io/projected/d427e158-3f69-44b8-abe3-1510fb4fdd1e-kube-api-access-rdg7g\") pod \"neutron-operator-controller-manager-78d58447c5-4bpjq\" (UID: \"d427e158-3f69-44b8-abe3-1510fb4fdd1e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990312 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls5cw\" (UniqueName: \"kubernetes.io/projected/2b2f16ec-bd97-4ff0-acf6-af298b2f3736-kube-api-access-ls5cw\") pod \"nova-operator-controller-manager-7bdb645866-pfz5s\" (UID: \"2b2f16ec-bd97-4ff0-acf6-af298b2f3736\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990331 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s98z\" (UniqueName: \"kubernetes.io/projected/7ac27e32-922a-4a46-9bb3-a3daa301dee7-kube-api-access-5s98z\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n\" (UID: \"7ac27e32-922a-4a46-9bb3-a3daa301dee7\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990352 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbphz\" (UniqueName: \"kubernetes.io/projected/68c5aef7-2f00-4a28-8a25-6af0a5cd4013-kube-api-access-jbphz\") pod \"keystone-operator-controller-manager-b8b6d4659-v9wk4\" (UID: \"68c5aef7-2f00-4a28-8a25-6af0a5cd4013\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" Jan 26 18:58:21 crc kubenswrapper[4770]: I0126 18:58:21.990870 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm"] Jan 26 18:58:22 crc 
kubenswrapper[4770]: I0126 18:58:22.008517 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.009607 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.013555 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xj45g" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.017748 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqhhf\" (UniqueName: \"kubernetes.io/projected/444d3be6-b12b-4473-abff-a5e5f35af270-kube-api-access-bqhhf\") pod \"manila-operator-controller-manager-78c6999f6f-58zsz\" (UID: \"444d3be6-b12b-4473-abff-a5e5f35af270\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.031041 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbphz\" (UniqueName: \"kubernetes.io/projected/68c5aef7-2f00-4a28-8a25-6af0a5cd4013-kube-api-access-jbphz\") pod \"keystone-operator-controller-manager-b8b6d4659-v9wk4\" (UID: \"68c5aef7-2f00-4a28-8a25-6af0a5cd4013\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.031524 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.032252 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdg7g\" (UniqueName: \"kubernetes.io/projected/d427e158-3f69-44b8-abe3-1510fb4fdd1e-kube-api-access-rdg7g\") pod \"neutron-operator-controller-manager-78d58447c5-4bpjq\" (UID: \"d427e158-3f69-44b8-abe3-1510fb4fdd1e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.039902 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.045441 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s98z\" (UniqueName: \"kubernetes.io/projected/7ac27e32-922a-4a46-9bb3-a3daa301dee7-kube-api-access-5s98z\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n\" (UID: \"7ac27e32-922a-4a46-9bb3-a3daa301dee7\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.064880 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.064603 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.066356 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.069582 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.069737 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.075069 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ldlv7" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.091977 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b47r\" (UniqueName: \"kubernetes.io/projected/ffc82616-ae6f-4f03-9c55-c235cd7cb5ff-kube-api-access-9b47r\") pod \"octavia-operator-controller-manager-5f4cd88d46-8wtk6\" (UID: \"ffc82616-ae6f-4f03-9c55-c235cd7cb5ff\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.092054 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7wl\" (UniqueName: \"kubernetes.io/projected/b594f7f1-d369-4dd7-8d7f-2969df165fb4-kube-api-access-bf7wl\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.092102 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g478s\" (UniqueName: \"kubernetes.io/projected/1fb1320e-c82f-4927-a48b-94ce5b6dcc03-kube-api-access-g478s\") pod 
\"swift-operator-controller-manager-547cbdb99f-6xngb\" (UID: \"1fb1320e-c82f-4927-a48b-94ce5b6dcc03\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.092187 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftg8\" (UniqueName: \"kubernetes.io/projected/6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3-kube-api-access-gftg8\") pod \"ovn-operator-controller-manager-6f75f45d54-745tt\" (UID: \"6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.092231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls5cw\" (UniqueName: \"kubernetes.io/projected/2b2f16ec-bd97-4ff0-acf6-af298b2f3736-kube-api-access-ls5cw\") pod \"nova-operator-controller-manager-7bdb645866-pfz5s\" (UID: \"2b2f16ec-bd97-4ff0-acf6-af298b2f3736\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.092263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.092323 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhmc9\" (UniqueName: \"kubernetes.io/projected/b6b3bfbb-893b-4122-8534-664e57faa6ce-kube-api-access-mhmc9\") pod \"placement-operator-controller-manager-79d5ccc684-gwfqm\" (UID: \"b6b3bfbb-893b-4122-8534-664e57faa6ce\") " 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.130290 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.152759 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.153605 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.158653 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kkzcn" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.160273 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.177031 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls5cw\" (UniqueName: \"kubernetes.io/projected/2b2f16ec-bd97-4ff0-acf6-af298b2f3736-kube-api-access-ls5cw\") pod \"nova-operator-controller-manager-7bdb645866-pfz5s\" (UID: \"2b2f16ec-bd97-4ff0-acf6-af298b2f3736\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.178177 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b47r\" (UniqueName: \"kubernetes.io/projected/ffc82616-ae6f-4f03-9c55-c235cd7cb5ff-kube-api-access-9b47r\") pod \"octavia-operator-controller-manager-5f4cd88d46-8wtk6\" (UID: \"ffc82616-ae6f-4f03-9c55-c235cd7cb5ff\") " 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.208470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftg8\" (UniqueName: \"kubernetes.io/projected/6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3-kube-api-access-gftg8\") pod \"ovn-operator-controller-manager-6f75f45d54-745tt\" (UID: \"6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.208568 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.208606 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fmls\" (UniqueName: \"kubernetes.io/projected/752eb71a-ee7a-47da-8945-41eee7a8c6b3-kube-api-access-5fmls\") pod \"telemetry-operator-controller-manager-85cd9769bb-4vb4t\" (UID: \"752eb71a-ee7a-47da-8945-41eee7a8c6b3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.208677 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhmc9\" (UniqueName: \"kubernetes.io/projected/b6b3bfbb-893b-4122-8534-664e57faa6ce-kube-api-access-mhmc9\") pod \"placement-operator-controller-manager-79d5ccc684-gwfqm\" (UID: \"b6b3bfbb-893b-4122-8534-664e57faa6ce\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.208766 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf7wl\" (UniqueName: \"kubernetes.io/projected/b594f7f1-d369-4dd7-8d7f-2969df165fb4-kube-api-access-bf7wl\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.208815 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g478s\" (UniqueName: \"kubernetes.io/projected/1fb1320e-c82f-4927-a48b-94ce5b6dcc03-kube-api-access-g478s\") pod \"swift-operator-controller-manager-547cbdb99f-6xngb\" (UID: \"1fb1320e-c82f-4927-a48b-94ce5b6dcc03\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.209500 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.209567 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert podName:b594f7f1-d369-4dd7-8d7f-2969df165fb4 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:22.70954853 +0000 UTC m=+987.274455262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" (UID: "b594f7f1-d369-4dd7-8d7f-2969df165fb4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.252917 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.261748 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.264281 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.267543 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.273682 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhmc9\" (UniqueName: \"kubernetes.io/projected/b6b3bfbb-893b-4122-8534-664e57faa6ce-kube-api-access-mhmc9\") pod \"placement-operator-controller-manager-79d5ccc684-gwfqm\" (UID: \"b6b3bfbb-893b-4122-8534-664e57faa6ce\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.278058 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jnprg" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.280751 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.288537 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g478s\" (UniqueName: \"kubernetes.io/projected/1fb1320e-c82f-4927-a48b-94ce5b6dcc03-kube-api-access-g478s\") pod \"swift-operator-controller-manager-547cbdb99f-6xngb\" (UID: \"1fb1320e-c82f-4927-a48b-94ce5b6dcc03\") " 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.289280 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.289476 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftg8\" (UniqueName: \"kubernetes.io/projected/6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3-kube-api-access-gftg8\") pod \"ovn-operator-controller-manager-6f75f45d54-745tt\" (UID: \"6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.304245 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf7wl\" (UniqueName: \"kubernetes.io/projected/b594f7f1-d369-4dd7-8d7f-2969df165fb4-kube-api-access-bf7wl\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.310358 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pspt\" (UniqueName: \"kubernetes.io/projected/bce0b4ae-6301-4b38-b960-13962608dab0-kube-api-access-4pspt\") pod \"test-operator-controller-manager-69797bbcbd-jllkr\" (UID: \"bce0b4ae-6301-4b38-b960-13962608dab0\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.310473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fmls\" (UniqueName: \"kubernetes.io/projected/752eb71a-ee7a-47da-8945-41eee7a8c6b3-kube-api-access-5fmls\") pod 
\"telemetry-operator-controller-manager-85cd9769bb-4vb4t\" (UID: \"752eb71a-ee7a-47da-8945-41eee7a8c6b3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.318980 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.345227 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.354490 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fmls\" (UniqueName: \"kubernetes.io/projected/752eb71a-ee7a-47da-8945-41eee7a8c6b3-kube-api-access-5fmls\") pod \"telemetry-operator-controller-manager-85cd9769bb-4vb4t\" (UID: \"752eb71a-ee7a-47da-8945-41eee7a8c6b3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.364298 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.365408 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.370446 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.371079 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.375458 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dk7bw" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.375666 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.394653 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.400518 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.400933 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.402566 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.405373 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bnxx9" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.412306 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pspt\" (UniqueName: \"kubernetes.io/projected/bce0b4ae-6301-4b38-b960-13962608dab0-kube-api-access-4pspt\") pod \"test-operator-controller-manager-69797bbcbd-jllkr\" (UID: \"bce0b4ae-6301-4b38-b960-13962608dab0\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.412371 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2btt\" (UniqueName: \"kubernetes.io/projected/d9a28594-7011-4810-a859-972dcde899e9-kube-api-access-c2btt\") pod \"watcher-operator-controller-manager-6bf5b95546-9qq5g\" (UID: \"d9a28594-7011-4810-a859-972dcde899e9\") " pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.412458 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.412566 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.412612 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert podName:462ae2ba-a49e-4eb3-9d7e-0a853412206f nodeName:}" failed. No retries permitted until 2026-01-26 18:58:23.412596846 +0000 UTC m=+987.977503578 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert") pod "infra-operator-controller-manager-694cf4f878-2tv9j" (UID: "462ae2ba-a49e-4eb3-9d7e-0a853412206f") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.414845 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.462484 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pspt\" (UniqueName: \"kubernetes.io/projected/bce0b4ae-6301-4b38-b960-13962608dab0-kube-api-access-4pspt\") pod \"test-operator-controller-manager-69797bbcbd-jllkr\" (UID: \"bce0b4ae-6301-4b38-b960-13962608dab0\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.514176 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2btt\" (UniqueName: \"kubernetes.io/projected/d9a28594-7011-4810-a859-972dcde899e9-kube-api-access-c2btt\") pod \"watcher-operator-controller-manager-6bf5b95546-9qq5g\" (UID: \"d9a28594-7011-4810-a859-972dcde899e9\") " pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.514256 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: 
\"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.514280 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.514307 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkfj\" (UniqueName: \"kubernetes.io/projected/c24f34a9-cf76-44f8-8435-ff01eca67ce3-kube-api-access-dbkfj\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.514377 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zsr\" (UniqueName: \"kubernetes.io/projected/ed015d41-0a86-45bc-ac7b-410e6ef09b6e-kube-api-access-l8zsr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9fnjm\" (UID: \"ed015d41-0a86-45bc-ac7b-410e6ef09b6e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.543964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2btt\" (UniqueName: \"kubernetes.io/projected/d9a28594-7011-4810-a859-972dcde899e9-kube-api-access-c2btt\") pod \"watcher-operator-controller-manager-6bf5b95546-9qq5g\" (UID: \"d9a28594-7011-4810-a859-972dcde899e9\") " pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" 
Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.616316 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.616355 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.616385 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkfj\" (UniqueName: \"kubernetes.io/projected/c24f34a9-cf76-44f8-8435-ff01eca67ce3-kube-api-access-dbkfj\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.616429 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8zsr\" (UniqueName: \"kubernetes.io/projected/ed015d41-0a86-45bc-ac7b-410e6ef09b6e-kube-api-access-l8zsr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9fnjm\" (UID: \"ed015d41-0a86-45bc-ac7b-410e6ef09b6e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.616499 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 
18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.616550 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:23.116536167 +0000 UTC m=+987.681442899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "metrics-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.616924 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.616956 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:23.116948558 +0000 UTC m=+987.681855280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.642416 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8zsr\" (UniqueName: \"kubernetes.io/projected/ed015d41-0a86-45bc-ac7b-410e6ef09b6e-kube-api-access-l8zsr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9fnjm\" (UID: \"ed015d41-0a86-45bc-ac7b-410e6ef09b6e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.643348 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkfj\" (UniqueName: \"kubernetes.io/projected/c24f34a9-cf76-44f8-8435-ff01eca67ce3-kube-api-access-dbkfj\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.722815 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.723304 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: E0126 18:58:22.723352 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert podName:b594f7f1-d369-4dd7-8d7f-2969df165fb4 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:23.723338239 +0000 UTC m=+988.288244971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" (UID: "b594f7f1-d369-4dd7-8d7f-2969df165fb4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.761005 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.761025 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" Jan 26 18:58:22 crc kubenswrapper[4770]: W0126 18:58:22.767979 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dfabc71_10aa_4337_a700_6dda2a4819d5.slice/crio-220e9f24c255d22ba2caf6b8806f85f51e400e5cc9c3a8dfcabe92d3cd973f9a WatchSource:0}: Error finding container 220e9f24c255d22ba2caf6b8806f85f51e400e5cc9c3a8dfcabe92d3cd973f9a: Status 404 returned error can't find the container with id 220e9f24c255d22ba2caf6b8806f85f51e400e5cc9c3a8dfcabe92d3cd973f9a Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.773621 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f"] Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.797610 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" Jan 26 18:58:22 crc kubenswrapper[4770]: I0126 18:58:22.806471 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.128888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.128938 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.129050 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.129078 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.129122 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:24.129100653 +0000 UTC m=+988.694007385 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "metrics-server-cert" not found Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.129138 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:24.129132054 +0000 UTC m=+988.694038786 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "webhook-server-cert" not found Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.300157 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.333758 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.374795 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.398977 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc"] Jan 26 18:58:23 crc kubenswrapper[4770]: W0126 18:58:23.432535 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99b8587f_51d1_4cb2_a0ab_e131c9135388.slice/crio-0fd8794219289adb8a0bf2c954f3875f668010242f7dbfe2ab05ef6fcfe3b157 WatchSource:0}: Error finding container 0fd8794219289adb8a0bf2c954f3875f668010242f7dbfe2ab05ef6fcfe3b157: Status 404 returned error can't find the container with id 0fd8794219289adb8a0bf2c954f3875f668010242f7dbfe2ab05ef6fcfe3b157 Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.434247 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.434436 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.434509 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert podName:462ae2ba-a49e-4eb3-9d7e-0a853412206f nodeName:}" failed. No retries permitted until 2026-01-26 18:58:25.43449034 +0000 UTC m=+989.999397072 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert") pod "infra-operator-controller-manager-694cf4f878-2tv9j" (UID: "462ae2ba-a49e-4eb3-9d7e-0a853412206f") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.435026 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp"] Jan 26 18:58:23 crc kubenswrapper[4770]: W0126 18:58:23.439211 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68c5aef7_2f00_4a28_8a25_6af0a5cd4013.slice/crio-e3c825bce4985ebd040aec189dae3d7bc74872471b2a255819d038cd228ceec3 WatchSource:0}: Error finding container e3c825bce4985ebd040aec189dae3d7bc74872471b2a255819d038cd228ceec3: Status 404 returned error can't find the container with id e3c825bce4985ebd040aec189dae3d7bc74872471b2a255819d038cd228ceec3 Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.470773 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6"] Jan 26 18:58:23 crc kubenswrapper[4770]: W0126 18:58:23.483118 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2f16ec_bd97_4ff0_acf6_af298b2f3736.slice/crio-35047561c23dfaaf53b8577a8bcb49e3bb6ad3f6499484b2135f543db9452c9c WatchSource:0}: Error finding container 35047561c23dfaaf53b8577a8bcb49e3bb6ad3f6499484b2135f543db9452c9c: Status 404 returned error can't find the container with id 35047561c23dfaaf53b8577a8bcb49e3bb6ad3f6499484b2135f543db9452c9c Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.483172 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n"] Jan 26 18:58:23 crc kubenswrapper[4770]: 
I0126 18:58:23.484204 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" event={"ID":"cc595d5d-2f69-47a8-a63f-7b4abce23fdd","Type":"ContainerStarted","Data":"25cbd2e80498116315fdbf172a33723fc097e471ad269b87b2596800327353f0"} Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.488724 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" event={"ID":"ffc82616-ae6f-4f03-9c55-c235cd7cb5ff","Type":"ContainerStarted","Data":"e8fffcccf8996391a5a5717d2a85d6e580f7ca08c8385c3267f3638a38d1d63b"} Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.500634 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqhhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-58zsz_openstack-operators(444d3be6-b12b-4473-abff-a5e5f35af270): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.501850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" event={"ID":"68c5aef7-2f00-4a28-8a25-6af0a5cd4013","Type":"ContainerStarted","Data":"e3c825bce4985ebd040aec189dae3d7bc74872471b2a255819d038cd228ceec3"} Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.501932 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" podUID="444d3be6-b12b-4473-abff-a5e5f35af270" Jan 26 18:58:23 crc 
kubenswrapper[4770]: I0126 18:58:23.512075 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh"] Jan 26 18:58:23 crc kubenswrapper[4770]: W0126 18:58:23.521373 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b3bfbb_893b_4122_8534_664e57faa6ce.slice/crio-763190b289087fcde6661059e261722955f99e9ee5a21b3775b17d1f7830fbd5 WatchSource:0}: Error finding container 763190b289087fcde6661059e261722955f99e9ee5a21b3775b17d1f7830fbd5: Status 404 returned error can't find the container with id 763190b289087fcde6661059e261722955f99e9ee5a21b3775b17d1f7830fbd5 Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.521720 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" event={"ID":"1666ea4c-3865-4bc2-8741-29383616e875","Type":"ContainerStarted","Data":"4abaa0930412616009366d9cae8308b60eb704b0afcff4089ac2766e5b8abf5e"} Jan 26 18:58:23 crc kubenswrapper[4770]: W0126 18:58:23.529257 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ed16ef_d3d9_47ba_aa86_3e3612a5cf6f.slice/crio-e39e042a7bb09446bcab4c9d945ac15c832b600a70c1f10da6cb9a498c656612 WatchSource:0}: Error finding container e39e042a7bb09446bcab4c9d945ac15c832b600a70c1f10da6cb9a498c656612: Status 404 returned error can't find the container with id e39e042a7bb09446bcab4c9d945ac15c832b600a70c1f10da6cb9a498c656612 Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.529251 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhmc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-gwfqm_openstack-operators(b6b3bfbb-893b-4122-8534-664e57faa6ce): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.530549 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" podUID="b6b3bfbb-893b-4122-8534-664e57faa6ce" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.531133 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" event={"ID":"7dfabc71-10aa-4337-a700-6dda2a4819d5","Type":"ContainerStarted","Data":"220e9f24c255d22ba2caf6b8806f85f51e400e5cc9c3a8dfcabe92d3cd973f9a"} Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.532332 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pspt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-jllkr_openstack-operators(bce0b4ae-6301-4b38-b960-13962608dab0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.533444 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" podUID="bce0b4ae-6301-4b38-b960-13962608dab0" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.534176 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s"] Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.536271 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mhxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-g4brh_openstack-operators(c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.540248 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" podUID="c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.540361 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" event={"ID":"7ac27e32-922a-4a46-9bb3-a3daa301dee7","Type":"ContainerStarted","Data":"81695cdd2250eedf1849f39cdccf23e5c6020f0abf704ec285066fb964a8f509"} Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.541285 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.545918 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" event={"ID":"d427e158-3f69-44b8-abe3-1510fb4fdd1e","Type":"ContainerStarted","Data":"744743ebb9b036c17b580defef6c57403acbc9364f8016c3de504841e4a44488"} Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.545957 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.547955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" event={"ID":"99b8587f-51d1-4cb2-a0ab-e131c9135388","Type":"ContainerStarted","Data":"0fd8794219289adb8a0bf2c954f3875f668010242f7dbfe2ab05ef6fcfe3b157"} Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.550115 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" event={"ID":"dc15189d-c78f-475d-9a49-dac90d4d4fcb","Type":"ContainerStarted","Data":"a3da01ea750959b4c7d1cae3dc6448e4af8a10e36f18462e922171bdf19fae79"} Jan 26 18:58:23 crc kubenswrapper[4770]: W0126 18:58:23.554168 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb1320e_c82f_4927_a48b_94ce5b6dcc03.slice/crio-80c34192a00a3e28bb75d9dede454a1ac272be553eee73904f2f6555848f64a9 WatchSource:0}: Error finding container 80c34192a00a3e28bb75d9dede454a1ac272be553eee73904f2f6555848f64a9: Status 404 returned error can't find the container with id 80c34192a00a3e28bb75d9dede454a1ac272be553eee73904f2f6555848f64a9 Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.554800 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t"] Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.557835 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g478s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-6xngb_openstack-operators(1fb1320e-c82f-4927-a48b-94ce5b6dcc03): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.558919 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" podUID="1fb1320e-c82f-4927-a48b-94ce5b6dcc03" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.564496 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.569796 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.574399 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.579044 4770 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb"] Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.619746 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g"] Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.636092 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.223:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2btt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6bf5b95546-9qq5g_openstack-operators(d9a28594-7011-4810-a859-972dcde899e9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.638086 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" podUID="d9a28594-7011-4810-a859-972dcde899e9" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.715020 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm"] Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.728405 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8zsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9fnjm_openstack-operators(ed015d41-0a86-45bc-ac7b-410e6ef09b6e): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.729560 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" podUID="ed015d41-0a86-45bc-ac7b-410e6ef09b6e" Jan 26 18:58:23 crc kubenswrapper[4770]: I0126 18:58:23.737655 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.737840 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:23 crc kubenswrapper[4770]: E0126 18:58:23.737876 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert podName:b594f7f1-d369-4dd7-8d7f-2969df165fb4 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:25.737864813 +0000 UTC m=+990.302771545 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" (UID: "b594f7f1-d369-4dd7-8d7f-2969df165fb4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.143377 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.143547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.143660 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.143736 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:26.14371949 +0000 UTC m=+990.708626222 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "metrics-server-cert" not found Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.143822 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.143966 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:26.143929545 +0000 UTC m=+990.708836327 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "webhook-server-cert" not found Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.586237 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" event={"ID":"0e7b29c5-2473-488f-a8cf-57863472bd68","Type":"ContainerStarted","Data":"1b2b7c6af095575b1ec456768f2cba3efbad74e6b9402e70ec78e352f123ed6e"} Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.591514 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" event={"ID":"b6b3bfbb-893b-4122-8534-664e57faa6ce","Type":"ContainerStarted","Data":"763190b289087fcde6661059e261722955f99e9ee5a21b3775b17d1f7830fbd5"} Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.593122 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" event={"ID":"c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f","Type":"ContainerStarted","Data":"e39e042a7bb09446bcab4c9d945ac15c832b600a70c1f10da6cb9a498c656612"} Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.593607 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" podUID="b6b3bfbb-893b-4122-8534-664e57faa6ce" Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.595082 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" podUID="c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f" Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.596484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" event={"ID":"d9a28594-7011-4810-a859-972dcde899e9","Type":"ContainerStarted","Data":"f546ff4e301aa4680cbe269958a44090a580cfc4fffd596bdc8010930ecc5b5f"} Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.600180 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" event={"ID":"1fb1320e-c82f-4927-a48b-94ce5b6dcc03","Type":"ContainerStarted","Data":"80c34192a00a3e28bb75d9dede454a1ac272be553eee73904f2f6555848f64a9"} Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.601015 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" podUID="1fb1320e-c82f-4927-a48b-94ce5b6dcc03" Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.603674 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" event={"ID":"2b2f16ec-bd97-4ff0-acf6-af298b2f3736","Type":"ContainerStarted","Data":"35047561c23dfaaf53b8577a8bcb49e3bb6ad3f6499484b2135f543db9452c9c"} Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.605382 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" event={"ID":"6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3","Type":"ContainerStarted","Data":"278e631a806fb83e546a3ba024005725422c223afa3f20353c42168838e255bc"} Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.616294 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" podUID="d9a28594-7011-4810-a859-972dcde899e9" Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.620798 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" event={"ID":"bce0b4ae-6301-4b38-b960-13962608dab0","Type":"ContainerStarted","Data":"76a1ff94f4a2d824789afb2589fe4cbf70d6652c76768f4b88fb160fa184004e"} Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.622884 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" podUID="bce0b4ae-6301-4b38-b960-13962608dab0" Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.634898 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" event={"ID":"752eb71a-ee7a-47da-8945-41eee7a8c6b3","Type":"ContainerStarted","Data":"08b1241452cb20d6820a78a1d9dd89e203d18ffa3493b42dfb2db24c037a0e3b"} Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.636511 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" event={"ID":"444d3be6-b12b-4473-abff-a5e5f35af270","Type":"ContainerStarted","Data":"913b61bbc37e3adb67b983b74bf7adb941497d1ae410323270d35a69e575d78c"} Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.638327 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" podUID="444d3be6-b12b-4473-abff-a5e5f35af270" Jan 26 18:58:24 crc kubenswrapper[4770]: I0126 18:58:24.645327 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" event={"ID":"ed015d41-0a86-45bc-ac7b-410e6ef09b6e","Type":"ContainerStarted","Data":"c9eeca0f4691c36f1963974a9473df6676805b698980dd8c3614c2a9b4dcb886"} Jan 26 18:58:24 crc kubenswrapper[4770]: E0126 18:58:24.646942 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" podUID="ed015d41-0a86-45bc-ac7b-410e6ef09b6e" Jan 26 18:58:25 crc kubenswrapper[4770]: I0126 18:58:25.483079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.483392 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.483513 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert podName:462ae2ba-a49e-4eb3-9d7e-0a853412206f nodeName:}" failed. No retries permitted until 2026-01-26 18:58:29.483431829 +0000 UTC m=+994.048338571 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert") pod "infra-operator-controller-manager-694cf4f878-2tv9j" (UID: "462ae2ba-a49e-4eb3-9d7e-0a853412206f") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.657115 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" podUID="b6b3bfbb-893b-4122-8534-664e57faa6ce" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.657851 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" podUID="c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.658934 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" podUID="d9a28594-7011-4810-a859-972dcde899e9" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.658970 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" podUID="bce0b4ae-6301-4b38-b960-13962608dab0" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.663462 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" podUID="ed015d41-0a86-45bc-ac7b-410e6ef09b6e" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.663529 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" podUID="1fb1320e-c82f-4927-a48b-94ce5b6dcc03" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.663556 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" podUID="444d3be6-b12b-4473-abff-a5e5f35af270" Jan 26 18:58:25 crc kubenswrapper[4770]: I0126 18:58:25.787661 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: 
\"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.787806 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:25 crc kubenswrapper[4770]: E0126 18:58:25.787862 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert podName:b594f7f1-d369-4dd7-8d7f-2969df165fb4 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:29.787847239 +0000 UTC m=+994.352753961 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" (UID: "b594f7f1-d369-4dd7-8d7f-2969df165fb4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:26 crc kubenswrapper[4770]: I0126 18:58:26.194450 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:26 crc kubenswrapper[4770]: I0126 18:58:26.194518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:26 crc kubenswrapper[4770]: E0126 
18:58:26.194775 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:58:26 crc kubenswrapper[4770]: E0126 18:58:26.194826 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:30.194809206 +0000 UTC m=+994.759715938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "webhook-server-cert" not found Jan 26 18:58:26 crc kubenswrapper[4770]: E0126 18:58:26.195038 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:58:26 crc kubenswrapper[4770]: E0126 18:58:26.195122 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:30.195106674 +0000 UTC m=+994.760013406 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "metrics-server-cert" not found Jan 26 18:58:29 crc kubenswrapper[4770]: I0126 18:58:29.545273 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:29 crc kubenswrapper[4770]: E0126 18:58:29.545461 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:29 crc kubenswrapper[4770]: E0126 18:58:29.545842 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert podName:462ae2ba-a49e-4eb3-9d7e-0a853412206f nodeName:}" failed. No retries permitted until 2026-01-26 18:58:37.5458246 +0000 UTC m=+1002.110731342 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert") pod "infra-operator-controller-manager-694cf4f878-2tv9j" (UID: "462ae2ba-a49e-4eb3-9d7e-0a853412206f") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:29 crc kubenswrapper[4770]: I0126 18:58:29.850786 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:29 crc kubenswrapper[4770]: E0126 18:58:29.851029 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:29 crc kubenswrapper[4770]: E0126 18:58:29.851100 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert podName:b594f7f1-d369-4dd7-8d7f-2969df165fb4 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:37.851082033 +0000 UTC m=+1002.415988765 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" (UID: "b594f7f1-d369-4dd7-8d7f-2969df165fb4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 18:58:30 crc kubenswrapper[4770]: I0126 18:58:30.256378 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:30 crc kubenswrapper[4770]: I0126 18:58:30.256768 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:30 crc kubenswrapper[4770]: E0126 18:58:30.256570 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 18:58:30 crc kubenswrapper[4770]: E0126 18:58:30.256888 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:38.256868878 +0000 UTC m=+1002.821775610 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "metrics-server-cert" not found Jan 26 18:58:30 crc kubenswrapper[4770]: E0126 18:58:30.256933 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 18:58:30 crc kubenswrapper[4770]: E0126 18:58:30.256988 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs podName:c24f34a9-cf76-44f8-8435-ff01eca67ce3 nodeName:}" failed. No retries permitted until 2026-01-26 18:58:38.256974071 +0000 UTC m=+1002.821880803 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs") pod "openstack-operator-controller-manager-6796fcb5b-6wf85" (UID: "c24f34a9-cf76-44f8-8435-ff01eca67ce3") : secret "webhook-server-cert" not found Jan 26 18:58:36 crc kubenswrapper[4770]: I0126 18:58:36.769437 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 18:58:37 crc kubenswrapper[4770]: I0126 18:58:37.608316 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:37 crc kubenswrapper[4770]: E0126 18:58:37.608566 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:37 crc kubenswrapper[4770]: E0126 
18:58:37.609101 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert podName:462ae2ba-a49e-4eb3-9d7e-0a853412206f nodeName:}" failed. No retries permitted until 2026-01-26 18:58:53.60907018 +0000 UTC m=+1018.173976912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert") pod "infra-operator-controller-manager-694cf4f878-2tv9j" (UID: "462ae2ba-a49e-4eb3-9d7e-0a853412206f") : secret "infra-operator-webhook-server-cert" not found Jan 26 18:58:37 crc kubenswrapper[4770]: I0126 18:58:37.912979 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:37 crc kubenswrapper[4770]: I0126 18:58:37.919995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b594f7f1-d369-4dd7-8d7f-2969df165fb4-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz\" (UID: \"b594f7f1-d369-4dd7-8d7f-2969df165fb4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:37 crc kubenswrapper[4770]: I0126 18:58:37.948125 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:38 crc kubenswrapper[4770]: I0126 18:58:38.320435 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:38 crc kubenswrapper[4770]: I0126 18:58:38.320668 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:38 crc kubenswrapper[4770]: I0126 18:58:38.324587 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-webhook-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:38 crc kubenswrapper[4770]: I0126 18:58:38.327496 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c24f34a9-cf76-44f8-8435-ff01eca67ce3-metrics-certs\") pod \"openstack-operator-controller-manager-6796fcb5b-6wf85\" (UID: \"c24f34a9-cf76-44f8-8435-ff01eca67ce3\") " pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:38 crc kubenswrapper[4770]: I0126 18:58:38.413603 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:41 crc kubenswrapper[4770]: E0126 18:58:41.809966 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 26 18:58:41 crc kubenswrapper[4770]: E0126 18:58:41.810767 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q5skf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-x8m5l_openstack-operators(1666ea4c-3865-4bc2-8741-29383616e875): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:58:41 crc kubenswrapper[4770]: E0126 18:58:41.811927 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" podUID="1666ea4c-3865-4bc2-8741-29383616e875" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.217335 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.217545 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gftg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-745tt_openstack-operators(6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.218877 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" podUID="6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.805659 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" podUID="1666ea4c-3865-4bc2-8741-29383616e875" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.811410 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" podUID="6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.911087 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.911275 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhhh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-gwg5f_openstack-operators(7dfabc71-10aa-4337-a700-6dda2a4819d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:58:42 crc kubenswrapper[4770]: E0126 18:58:42.912429 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" podUID="7dfabc71-10aa-4337-a700-6dda2a4819d5" Jan 26 18:58:43 crc kubenswrapper[4770]: E0126 18:58:43.508637 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 26 18:58:43 crc kubenswrapper[4770]: E0126 18:58:43.508845 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ls5cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-pfz5s_openstack-operators(2b2f16ec-bd97-4ff0-acf6-af298b2f3736): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:58:43 crc kubenswrapper[4770]: E0126 18:58:43.510064 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" podUID="2b2f16ec-bd97-4ff0-acf6-af298b2f3736" Jan 26 18:58:43 crc kubenswrapper[4770]: E0126 18:58:43.811367 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" podUID="2b2f16ec-bd97-4ff0-acf6-af298b2f3736" Jan 26 18:58:43 crc kubenswrapper[4770]: E0126 18:58:43.811523 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" podUID="7dfabc71-10aa-4337-a700-6dda2a4819d5" Jan 26 18:58:44 crc kubenswrapper[4770]: E0126 18:58:44.477819 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 18:58:44 crc kubenswrapper[4770]: E0126 18:58:44.478093 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbphz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-v9wk4_openstack-operators(68c5aef7-2f00-4a28-8a25-6af0a5cd4013): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:58:44 crc kubenswrapper[4770]: E0126 18:58:44.479288 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" podUID="68c5aef7-2f00-4a28-8a25-6af0a5cd4013" Jan 26 18:58:44 crc kubenswrapper[4770]: E0126 18:58:44.821111 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" podUID="68c5aef7-2f00-4a28-8a25-6af0a5cd4013" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.657046 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz"] Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.819020 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85"] Jan 26 18:58:48 crc kubenswrapper[4770]: W0126 18:58:48.825823 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc24f34a9_cf76_44f8_8435_ff01eca67ce3.slice/crio-39af1482a7e92cef932d766658fceca66a4400469efeb9db4ad62bbd0af76ab7 WatchSource:0}: Error finding container 39af1482a7e92cef932d766658fceca66a4400469efeb9db4ad62bbd0af76ab7: Status 404 returned error can't find the container with id 39af1482a7e92cef932d766658fceca66a4400469efeb9db4ad62bbd0af76ab7 Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.881896 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" event={"ID":"752eb71a-ee7a-47da-8945-41eee7a8c6b3","Type":"ContainerStarted","Data":"7a70859217b10dd4f3be184d12c98bc634df1bd0c70ddc3110ba6050088425a9"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.884997 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.931709 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" 
event={"ID":"d427e158-3f69-44b8-abe3-1510fb4fdd1e","Type":"ContainerStarted","Data":"21011c21d41f35c8c40ebc670306eb5db848d57e212af18be6b68a0a4da4977a"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.932717 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.933028 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" podStartSLOduration=6.409068496 podStartE2EDuration="27.933018022s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.498335575 +0000 UTC m=+988.063242307" lastFinishedPulling="2026-01-26 18:58:45.022285091 +0000 UTC m=+1009.587191833" observedRunningTime="2026-01-26 18:58:48.923326058 +0000 UTC m=+1013.488232790" watchObservedRunningTime="2026-01-26 18:58:48.933018022 +0000 UTC m=+1013.497924754" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.942790 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" event={"ID":"7ac27e32-922a-4a46-9bb3-a3daa301dee7","Type":"ContainerStarted","Data":"a65cfe468f7097a41702b5b65e7bdecc5921757b99843d745d86961fa7f213b1"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.942873 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.948985 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.950357 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.964059 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" podStartSLOduration=6.27747432 podStartE2EDuration="27.964036644s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.336025165 +0000 UTC m=+987.900931897" lastFinishedPulling="2026-01-26 18:58:45.022587489 +0000 UTC m=+1009.587494221" observedRunningTime="2026-01-26 18:58:48.961049713 +0000 UTC m=+1013.525956445" watchObservedRunningTime="2026-01-26 18:58:48.964036644 +0000 UTC m=+1013.528943376" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.966879 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" event={"ID":"1fb1320e-c82f-4927-a48b-94ce5b6dcc03","Type":"ContainerStarted","Data":"103ed6b297c818f3fbfdf5d3c073b02265a48a7750cd49bc7ace0be5f862ffb7"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.967631 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.975058 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" event={"ID":"c24f34a9-cf76-44f8-8435-ff01eca67ce3","Type":"ContainerStarted","Data":"39af1482a7e92cef932d766658fceca66a4400469efeb9db4ad62bbd0af76ab7"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.984410 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" podStartSLOduration=6.959117389 podStartE2EDuration="27.984396317s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" 
firstStartedPulling="2026-01-26 18:58:23.409307506 +0000 UTC m=+987.974214238" lastFinishedPulling="2026-01-26 18:58:44.434586444 +0000 UTC m=+1008.999493166" observedRunningTime="2026-01-26 18:58:48.981811686 +0000 UTC m=+1013.546718418" watchObservedRunningTime="2026-01-26 18:58:48.984396317 +0000 UTC m=+1013.549303039" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.986109 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" event={"ID":"b594f7f1-d369-4dd7-8d7f-2969df165fb4","Type":"ContainerStarted","Data":"5eb9749b5a06908b4f5d4c9e7b9c42a0c475bfae6899b50a3fe95358447d33ce"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.989667 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" event={"ID":"ffc82616-ae6f-4f03-9c55-c235cd7cb5ff","Type":"ContainerStarted","Data":"1a320b21884536297e57b638f074105858112eaf36110002bd9224125f759478"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.990635 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.995201 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" event={"ID":"99b8587f-51d1-4cb2-a0ab-e131c9135388","Type":"ContainerStarted","Data":"18a5a812a54aed04618f8560a23f0a60f5af19b6567c671cc04510096621f650"} Jan 26 18:58:48 crc kubenswrapper[4770]: I0126 18:58:48.996271 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.009819 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" event={"ID":"b6b3bfbb-893b-4122-8534-664e57faa6ce","Type":"ContainerStarted","Data":"a1af68d9420b9ccf37b02ec30e2197880138884258aa66c1a07b366c2582a92b"} Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.010428 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.011595 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" event={"ID":"0e7b29c5-2473-488f-a8cf-57863472bd68","Type":"ContainerStarted","Data":"9f1aed6d97fe321d463eda962d829a65b010304e25aab4dd749a9c90b74979af"} Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.011994 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.020129 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" podStartSLOduration=6.385772203 podStartE2EDuration="28.020111078s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.387185495 +0000 UTC m=+987.952092227" lastFinishedPulling="2026-01-26 18:58:45.02152436 +0000 UTC m=+1009.586431102" observedRunningTime="2026-01-26 18:58:49.012050578 +0000 UTC m=+1013.576957310" watchObservedRunningTime="2026-01-26 18:58:49.020111078 +0000 UTC m=+1013.585017810" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.040844 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" podStartSLOduration=6.448271393 podStartE2EDuration="28.040823321s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" 
firstStartedPulling="2026-01-26 18:58:23.429517466 +0000 UTC m=+987.994424198" lastFinishedPulling="2026-01-26 18:58:45.022069394 +0000 UTC m=+1009.586976126" observedRunningTime="2026-01-26 18:58:49.033960744 +0000 UTC m=+1013.598867476" watchObservedRunningTime="2026-01-26 18:58:49.040823321 +0000 UTC m=+1013.605730063" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.060935 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" podStartSLOduration=7.102479705 podStartE2EDuration="28.060919166s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.476238905 +0000 UTC m=+988.041145637" lastFinishedPulling="2026-01-26 18:58:44.434678356 +0000 UTC m=+1008.999585098" observedRunningTime="2026-01-26 18:58:49.049687971 +0000 UTC m=+1013.614594703" watchObservedRunningTime="2026-01-26 18:58:49.060919166 +0000 UTC m=+1013.625825888" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.084040 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" podStartSLOduration=3.255387873 podStartE2EDuration="28.084023853s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.529121612 +0000 UTC m=+988.094028344" lastFinishedPulling="2026-01-26 18:58:48.357757602 +0000 UTC m=+1012.922664324" observedRunningTime="2026-01-26 18:58:49.081092694 +0000 UTC m=+1013.645999426" watchObservedRunningTime="2026-01-26 18:58:49.084023853 +0000 UTC m=+1013.648930585" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.098817 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" podStartSLOduration=3.296774158 podStartE2EDuration="28.098797035s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" 
firstStartedPulling="2026-01-26 18:58:23.557654517 +0000 UTC m=+988.122561249" lastFinishedPulling="2026-01-26 18:58:48.359677394 +0000 UTC m=+1012.924584126" observedRunningTime="2026-01-26 18:58:49.094059317 +0000 UTC m=+1013.658966049" watchObservedRunningTime="2026-01-26 18:58:49.098797035 +0000 UTC m=+1013.663703767" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.236687 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" podStartSLOduration=7.786217191 podStartE2EDuration="28.236669601s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.540289085 +0000 UTC m=+988.105195817" lastFinishedPulling="2026-01-26 18:58:43.990741495 +0000 UTC m=+1008.555648227" observedRunningTime="2026-01-26 18:58:49.138919466 +0000 UTC m=+1013.703826198" watchObservedRunningTime="2026-01-26 18:58:49.236669601 +0000 UTC m=+1013.801576333" Jan 26 18:58:49 crc kubenswrapper[4770]: I0126 18:58:49.237020 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" podStartSLOduration=7.2697914820000005 podStartE2EDuration="28.237015881s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.467388435 +0000 UTC m=+988.032295167" lastFinishedPulling="2026-01-26 18:58:44.434612824 +0000 UTC m=+1008.999519566" observedRunningTime="2026-01-26 18:58:49.236943539 +0000 UTC m=+1013.801850261" watchObservedRunningTime="2026-01-26 18:58:49.237015881 +0000 UTC m=+1013.801922613" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.025678 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" event={"ID":"c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f","Type":"ContainerStarted","Data":"f1bc446022ae2ffa11113d418ccc24b68ec81c5deb6dea3dea4d5c792ec309f2"} Jan 
26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.026124 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.027642 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" event={"ID":"ed015d41-0a86-45bc-ac7b-410e6ef09b6e","Type":"ContainerStarted","Data":"25e3da26f77f8b3c84ca0dfbb9f2d6cbaf5d583d4ca538812905f29633ad567d"} Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.032839 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" event={"ID":"d9a28594-7011-4810-a859-972dcde899e9","Type":"ContainerStarted","Data":"8c3c9e3cf0906d918d7121b75597cd2117de0079d24adc447aa79b29766f74f5"} Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.033227 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.035687 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" event={"ID":"bce0b4ae-6301-4b38-b960-13962608dab0","Type":"ContainerStarted","Data":"bcfc198b9dae6e140d40769a5d044f35875adb1b28c59407f01a5e0d0c00a2d6"} Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.036394 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.039630 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" 
event={"ID":"c24f34a9-cf76-44f8-8435-ff01eca67ce3","Type":"ContainerStarted","Data":"8e29cfed9aecb6d95df4e928f6c6f859c8b301a6c39deff0ada719b615671913"} Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.040206 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.044391 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" podStartSLOduration=4.220150234 podStartE2EDuration="29.044378765s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.535531915 +0000 UTC m=+988.100438647" lastFinishedPulling="2026-01-26 18:58:48.359760446 +0000 UTC m=+1012.924667178" observedRunningTime="2026-01-26 18:58:50.04307745 +0000 UTC m=+1014.607984182" watchObservedRunningTime="2026-01-26 18:58:50.044378765 +0000 UTC m=+1014.609285497" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.046146 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" event={"ID":"cc595d5d-2f69-47a8-a63f-7b4abce23fdd","Type":"ContainerStarted","Data":"384e40037e93ff40654bdc13ac957fcd5edad68953e8e609be79e0a44e34eaf4"} Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.047882 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" event={"ID":"dc15189d-c78f-475d-9a49-dac90d4d4fcb","Type":"ContainerStarted","Data":"13587dd73a272d8dd65eeee0987eb8b8fabadbd46c7d28d22135dc3b147c54e2"} Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.050296 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" 
event={"ID":"444d3be6-b12b-4473-abff-a5e5f35af270","Type":"ContainerStarted","Data":"56bbbed8c5b85f7ab0de9001a46c3dc7a7a843b4c87bdd9356546e29f293bec6"} Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.050590 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.059968 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" podStartSLOduration=3.241945427 podStartE2EDuration="28.059946008s" podCreationTimestamp="2026-01-26 18:58:22 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.635941204 +0000 UTC m=+988.200847936" lastFinishedPulling="2026-01-26 18:58:48.453941795 +0000 UTC m=+1013.018848517" observedRunningTime="2026-01-26 18:58:50.05591919 +0000 UTC m=+1014.620825942" watchObservedRunningTime="2026-01-26 18:58:50.059946008 +0000 UTC m=+1014.624852740" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.086185 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" podStartSLOduration=4.258665341 podStartE2EDuration="29.086163181s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.532247496 +0000 UTC m=+988.097154228" lastFinishedPulling="2026-01-26 18:58:48.359745336 +0000 UTC m=+1012.924652068" observedRunningTime="2026-01-26 18:58:50.069528579 +0000 UTC m=+1014.634435311" watchObservedRunningTime="2026-01-26 18:58:50.086163181 +0000 UTC m=+1014.651069913" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.110272 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" podStartSLOduration=28.110252635 podStartE2EDuration="28.110252635s" podCreationTimestamp="2026-01-26 
18:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:58:50.104245552 +0000 UTC m=+1014.669152294" watchObservedRunningTime="2026-01-26 18:58:50.110252635 +0000 UTC m=+1014.675159367" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.149644 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" podStartSLOduration=4.290881647 podStartE2EDuration="29.149630636s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.500452912 +0000 UTC m=+988.065359644" lastFinishedPulling="2026-01-26 18:58:48.359201861 +0000 UTC m=+1012.924108633" observedRunningTime="2026-01-26 18:58:50.145215235 +0000 UTC m=+1014.710121967" watchObservedRunningTime="2026-01-26 18:58:50.149630636 +0000 UTC m=+1014.714537368" Jan 26 18:58:50 crc kubenswrapper[4770]: I0126 18:58:50.149966 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9fnjm" podStartSLOduration=3.431383564 podStartE2EDuration="28.149960364s" podCreationTimestamp="2026-01-26 18:58:22 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.728305673 +0000 UTC m=+988.293212405" lastFinishedPulling="2026-01-26 18:58:48.446882473 +0000 UTC m=+1013.011789205" observedRunningTime="2026-01-26 18:58:50.127916075 +0000 UTC m=+1014.692822807" watchObservedRunningTime="2026-01-26 18:58:50.149960364 +0000 UTC m=+1014.714867096" Jan 26 18:58:52 crc kubenswrapper[4770]: I0126 18:58:52.082116 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" event={"ID":"b594f7f1-d369-4dd7-8d7f-2969df165fb4","Type":"ContainerStarted","Data":"457cd65577495b5ee7a990aba7e42f5a21a1f884f8326dfbee091a3a71d01fd7"} Jan 26 18:58:52 crc 
kubenswrapper[4770]: I0126 18:58:52.082939 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:52 crc kubenswrapper[4770]: I0126 18:58:52.118555 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" podStartSLOduration=28.508834396 podStartE2EDuration="31.118530169s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:48.671526927 +0000 UTC m=+1013.236433659" lastFinishedPulling="2026-01-26 18:58:51.2812227 +0000 UTC m=+1015.846129432" observedRunningTime="2026-01-26 18:58:52.111530219 +0000 UTC m=+1016.676437021" watchObservedRunningTime="2026-01-26 18:58:52.118530169 +0000 UTC m=+1016.683436911" Jan 26 18:58:53 crc kubenswrapper[4770]: I0126 18:58:53.672779 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:53 crc kubenswrapper[4770]: I0126 18:58:53.681602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/462ae2ba-a49e-4eb3-9d7e-0a853412206f-cert\") pod \"infra-operator-controller-manager-694cf4f878-2tv9j\" (UID: \"462ae2ba-a49e-4eb3-9d7e-0a853412206f\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:53 crc kubenswrapper[4770]: I0126 18:58:53.815382 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:54 crc kubenswrapper[4770]: I0126 18:58:54.260166 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j"] Jan 26 18:58:54 crc kubenswrapper[4770]: W0126 18:58:54.280681 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod462ae2ba_a49e_4eb3_9d7e_0a853412206f.slice/crio-bb95cca990c4407b1329ad95d58b7c7332560ec26d3f572c2b427b2c105abeb0 WatchSource:0}: Error finding container bb95cca990c4407b1329ad95d58b7c7332560ec26d3f572c2b427b2c105abeb0: Status 404 returned error can't find the container with id bb95cca990c4407b1329ad95d58b7c7332560ec26d3f572c2b427b2c105abeb0 Jan 26 18:58:55 crc kubenswrapper[4770]: I0126 18:58:55.109634 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" event={"ID":"6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3","Type":"ContainerStarted","Data":"c6a5010025977d237b8c3ed483aafc88b9cf36ec347c3177777c6fdee8f51e47"} Jan 26 18:58:55 crc kubenswrapper[4770]: I0126 18:58:55.110145 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" Jan 26 18:58:55 crc kubenswrapper[4770]: I0126 18:58:55.111174 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" event={"ID":"462ae2ba-a49e-4eb3-9d7e-0a853412206f","Type":"ContainerStarted","Data":"bb95cca990c4407b1329ad95d58b7c7332560ec26d3f572c2b427b2c105abeb0"} Jan 26 18:58:55 crc kubenswrapper[4770]: I0126 18:58:55.131176 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" podStartSLOduration=3.339647702 
podStartE2EDuration="34.131157268s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.491988912 +0000 UTC m=+988.056895644" lastFinishedPulling="2026-01-26 18:58:54.283498478 +0000 UTC m=+1018.848405210" observedRunningTime="2026-01-26 18:58:55.127466779 +0000 UTC m=+1019.692373511" watchObservedRunningTime="2026-01-26 18:58:55.131157268 +0000 UTC m=+1019.696064000" Jan 26 18:58:56 crc kubenswrapper[4770]: I0126 18:58:56.123665 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" event={"ID":"462ae2ba-a49e-4eb3-9d7e-0a853412206f","Type":"ContainerStarted","Data":"b23885df3be1b9760c558ce06901c58b252a578ab89fe6a78a0b3eb770cea9d3"} Jan 26 18:58:56 crc kubenswrapper[4770]: I0126 18:58:56.124407 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:58:56 crc kubenswrapper[4770]: I0126 18:58:56.145918 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" podStartSLOduration=33.528579407 podStartE2EDuration="35.145896848s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:54.2846481 +0000 UTC m=+1018.849554832" lastFinishedPulling="2026-01-26 18:58:55.901965541 +0000 UTC m=+1020.466872273" observedRunningTime="2026-01-26 18:58:56.144942362 +0000 UTC m=+1020.709849124" watchObservedRunningTime="2026-01-26 18:58:56.145896848 +0000 UTC m=+1020.710803600" Jan 26 18:58:57 crc kubenswrapper[4770]: I0126 18:58:57.955586 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz" Jan 26 18:58:58 crc kubenswrapper[4770]: I0126 18:58:58.136268 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" event={"ID":"1666ea4c-3865-4bc2-8741-29383616e875","Type":"ContainerStarted","Data":"86f7263532406ff28af74bf589e3886202fb578d52a427cdfd66f160c9ac2785"} Jan 26 18:58:58 crc kubenswrapper[4770]: I0126 18:58:58.137101 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" Jan 26 18:58:58 crc kubenswrapper[4770]: I0126 18:58:58.138821 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" event={"ID":"2b2f16ec-bd97-4ff0-acf6-af298b2f3736","Type":"ContainerStarted","Data":"623ebc362a1d460ade429952f2960e8a248bd4d8226d486559955d4f3778a0f9"} Jan 26 18:58:58 crc kubenswrapper[4770]: I0126 18:58:58.139027 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" Jan 26 18:58:58 crc kubenswrapper[4770]: I0126 18:58:58.154473 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" podStartSLOduration=2.699383298 podStartE2EDuration="37.154455119s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:22.766211474 +0000 UTC m=+987.331118206" lastFinishedPulling="2026-01-26 18:58:57.221283255 +0000 UTC m=+1021.786190027" observedRunningTime="2026-01-26 18:58:58.152902877 +0000 UTC m=+1022.717809639" watchObservedRunningTime="2026-01-26 18:58:58.154455119 +0000 UTC m=+1022.719361851" Jan 26 18:58:58 crc kubenswrapper[4770]: I0126 18:58:58.169235 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" podStartSLOduration=3.511230364 podStartE2EDuration="37.169215329s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" 
firstStartedPulling="2026-01-26 18:58:23.486862034 +0000 UTC m=+988.051768766" lastFinishedPulling="2026-01-26 18:58:57.144846989 +0000 UTC m=+1021.709753731" observedRunningTime="2026-01-26 18:58:58.166256719 +0000 UTC m=+1022.731163461" watchObservedRunningTime="2026-01-26 18:58:58.169215329 +0000 UTC m=+1022.734122061" Jan 26 18:58:58 crc kubenswrapper[4770]: I0126 18:58:58.419678 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6796fcb5b-6wf85" Jan 26 18:58:59 crc kubenswrapper[4770]: I0126 18:58:59.146238 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" event={"ID":"7dfabc71-10aa-4337-a700-6dda2a4819d5","Type":"ContainerStarted","Data":"8f5f9b4e36af98801f5329f19d4f688189eedf2064f34da4b5656c1726f22f54"} Jan 26 18:58:59 crc kubenswrapper[4770]: I0126 18:58:59.146744 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" Jan 26 18:58:59 crc kubenswrapper[4770]: I0126 18:58:59.161334 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" podStartSLOduration=2.603638777 podStartE2EDuration="38.161312494s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:22.79807987 +0000 UTC m=+987.362986602" lastFinishedPulling="2026-01-26 18:58:58.355753587 +0000 UTC m=+1022.920660319" observedRunningTime="2026-01-26 18:58:59.159185606 +0000 UTC m=+1023.724092358" watchObservedRunningTime="2026-01-26 18:58:59.161312494 +0000 UTC m=+1023.726219236" Jan 26 18:59:01 crc kubenswrapper[4770]: I0126 18:59:01.161544 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" 
event={"ID":"68c5aef7-2f00-4a28-8a25-6af0a5cd4013","Type":"ContainerStarted","Data":"bc5767693a341fa1b5ab09bbf15105b1995961b4e4f1aabfb5dd4d5eff5c278d"} Jan 26 18:59:01 crc kubenswrapper[4770]: I0126 18:59:01.162021 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" Jan 26 18:59:01 crc kubenswrapper[4770]: I0126 18:59:01.177157 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" podStartSLOduration=3.385571491 podStartE2EDuration="40.177141622s" podCreationTimestamp="2026-01-26 18:58:21 +0000 UTC" firstStartedPulling="2026-01-26 18:58:23.441544422 +0000 UTC m=+988.006451154" lastFinishedPulling="2026-01-26 18:59:00.233114553 +0000 UTC m=+1024.798021285" observedRunningTime="2026-01-26 18:59:01.174455519 +0000 UTC m=+1025.739362251" watchObservedRunningTime="2026-01-26 18:59:01.177141622 +0000 UTC m=+1025.742048354" Jan 26 18:59:01 crc kubenswrapper[4770]: I0126 18:59:01.869546 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-g9nzc" Jan 26 18:59:01 crc kubenswrapper[4770]: I0126 18:59:01.928930 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-h2zrp" Jan 26 18:59:01 crc kubenswrapper[4770]: I0126 18:59:01.943350 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4brh" Jan 26 18:59:01 crc kubenswrapper[4770]: I0126 18:59:01.961783 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-zn9m9" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.033564 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jg69w" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.068309 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-58zsz" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.133994 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.256371 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-4bpjq" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.271638 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-pfz5s" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.295049 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-8wtk6" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.329630 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-745tt" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.367424 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-gwfqm" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.372719 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6xngb" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.404532 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4vb4t" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.763838 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-jllkr" Jan 26 18:59:02 crc kubenswrapper[4770]: I0126 18:59:02.809617 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6bf5b95546-9qq5g" Jan 26 18:59:03 crc kubenswrapper[4770]: I0126 18:59:03.823465 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-2tv9j" Jan 26 18:59:11 crc kubenswrapper[4770]: I0126 18:59:11.844542 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-x8m5l" Jan 26 18:59:11 crc kubenswrapper[4770]: I0126 18:59:11.880842 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gwg5f" Jan 26 18:59:12 crc kubenswrapper[4770]: I0126 18:59:12.069185 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-v9wk4" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.218779 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7768b46857-pxgm8"] Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.220305 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.230323 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-d7lk4" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.230441 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.230645 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.231040 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.231069 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.236741 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7768b46857-pxgm8"] Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.330476 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxm76\" (UniqueName: \"kubernetes.io/projected/318c8209-0a19-4f09-b6c7-0f68f3ce971e-kube-api-access-gxm76\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.331258 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-dns-svc\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.331389 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-config\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.432705 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-dns-svc\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.432983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-config\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.433129 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxm76\" (UniqueName: \"kubernetes.io/projected/318c8209-0a19-4f09-b6c7-0f68f3ce971e-kube-api-access-gxm76\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.434147 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-config\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.434382 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-dns-svc\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.458100 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxm76\" (UniqueName: \"kubernetes.io/projected/318c8209-0a19-4f09-b6c7-0f68f3ce971e-kube-api-access-gxm76\") pod \"dnsmasq-dns-7768b46857-pxgm8\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:36 crc kubenswrapper[4770]: I0126 18:59:36.543012 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:37 crc kubenswrapper[4770]: I0126 18:59:37.046533 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7768b46857-pxgm8"] Jan 26 18:59:37 crc kubenswrapper[4770]: I0126 18:59:37.428474 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7768b46857-pxgm8" event={"ID":"318c8209-0a19-4f09-b6c7-0f68f3ce971e","Type":"ContainerStarted","Data":"a53bf6bcabd4f7623237ff01fb04bb3e3bf7a2efd11ffbaf3b9fd4bf5000909a"} Jan 26 18:59:39 crc kubenswrapper[4770]: I0126 18:59:39.996545 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-584684c95c-bpdl5"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.005617 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.027746 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-584684c95c-bpdl5"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.190397 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-dns-svc\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.190457 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-config\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.190500 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfzzz\" (UniqueName: \"kubernetes.io/projected/c9887ef0-63fd-44fa-863c-4af8873efcaf-kube-api-access-cfzzz\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.291215 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7768b46857-pxgm8"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.291528 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfzzz\" (UniqueName: \"kubernetes.io/projected/c9887ef0-63fd-44fa-863c-4af8873efcaf-kube-api-access-cfzzz\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " 
pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.291654 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-dns-svc\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.291717 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-config\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.292613 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-config\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.293621 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-dns-svc\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.335501 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfzzz\" (UniqueName: \"kubernetes.io/projected/c9887ef0-63fd-44fa-863c-4af8873efcaf-kube-api-access-cfzzz\") pod \"dnsmasq-dns-584684c95c-bpdl5\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.343049 4770 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.345054 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fd79c85c-tzzhs"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.346467 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.358976 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fd79c85c-tzzhs"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.495598 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-config\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.495642 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khhkj\" (UniqueName: \"kubernetes.io/projected/05a727c9-a964-4a43-b1ae-8fc566f92253-kube-api-access-khhkj\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.495991 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-dns-svc\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.597711 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-config\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.597761 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khhkj\" (UniqueName: \"kubernetes.io/projected/05a727c9-a964-4a43-b1ae-8fc566f92253-kube-api-access-khhkj\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.597835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-dns-svc\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.598670 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-dns-svc\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.599194 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-config\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.604339 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-584684c95c-bpdl5"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.632902 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-khhkj\" (UniqueName: \"kubernetes.io/projected/05a727c9-a964-4a43-b1ae-8fc566f92253-kube-api-access-khhkj\") pod \"dnsmasq-dns-55fd79c85c-tzzhs\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.639784 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57f7fc7997-g2g48"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.640853 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.665658 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f7fc7997-g2g48"] Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.672249 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.800641 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-config\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.800843 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-dns-svc\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.800896 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jm8t\" (UniqueName: 
\"kubernetes.io/projected/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-kube-api-access-7jm8t\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.901711 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-dns-svc\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.902006 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jm8t\" (UniqueName: \"kubernetes.io/projected/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-kube-api-access-7jm8t\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.902107 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-config\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.903582 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-dns-svc\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.904384 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-config\") pod 
\"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.919845 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jm8t\" (UniqueName: \"kubernetes.io/projected/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-kube-api-access-7jm8t\") pod \"dnsmasq-dns-57f7fc7997-g2g48\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:40 crc kubenswrapper[4770]: I0126 18:59:40.968332 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.183884 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.185344 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.190225 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.190257 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.190474 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.190505 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.190594 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sw2ks" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.190629 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.190679 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.230164 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308000 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308080 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308124 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308145 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308167 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308188 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khplx\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-kube-api-access-khplx\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308217 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308259 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308280 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308305 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.308447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.409969 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410059 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410092 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410115 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khplx\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-kube-api-access-khplx\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410142 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410502 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410651 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 
18:59:41.410741 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410776 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.410836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.411157 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.411217 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-config-data\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.411476 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.411921 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.412586 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.414253 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.416882 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.417325 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc 
kubenswrapper[4770]: I0126 18:59:41.427595 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khplx\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-kube-api-access-khplx\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.430506 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.433725 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.454864 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.458629 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.462842 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.463099 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.463275 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.464055 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.464234 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.464459 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.464816 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sm5gm" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.478060 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.516216 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.613984 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8ldz\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-kube-api-access-v8ldz\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614033 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614075 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/176a0205-a131-4510-bcf5-420945c4c6ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614110 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614159 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614183 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614206 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614221 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/176a0205-a131-4510-bcf5-420945c4c6ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614250 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.614273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.715924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.715996 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.716033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.716086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 
18:59:41.716132 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/176a0205-a131-4510-bcf5-420945c4c6ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.716134 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.717106 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.717867 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.717987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.718042 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.718085 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8ldz\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-kube-api-access-v8ldz\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.718123 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.718215 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/176a0205-a131-4510-bcf5-420945c4c6ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.718287 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.719280 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.720279 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/176a0205-a131-4510-bcf5-420945c4c6ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.721548 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.722667 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.724518 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.725676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.733446 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8ldz\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-kube-api-access-v8ldz\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.734206 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/176a0205-a131-4510-bcf5-420945c4c6ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.739875 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.758171 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.760179 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.764475 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.764897 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.765070 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.765194 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.765297 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.765713 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-5xjkk" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.765872 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.785585 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.812688 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.921815 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.921887 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.921941 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4p6n\" (UniqueName: \"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-kube-api-access-g4p6n\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.921963 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.921992 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-config-data\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.922014 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.922042 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.922159 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.922246 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.922274 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:41 crc kubenswrapper[4770]: I0126 18:59:41.922336 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.025800 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.025856 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.025883 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.025904 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.025938 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.025960 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.025979 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.026012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.026045 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.026070 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.026101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4p6n\" (UniqueName: \"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-kube-api-access-g4p6n\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.027483 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.028519 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.029918 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.030313 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.030769 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.031082 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.034442 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.035523 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.038008 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.052764 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.054815 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4p6n\" (UniqueName: \"kubernetes.io/projected/7e3d608a-c9d7-4a29-b45a-0c175851fdbc-kube-api-access-g4p6n\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.062802 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"7e3d608a-c9d7-4a29-b45a-0c175851fdbc\") " pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:42 crc kubenswrapper[4770]: I0126 18:59:42.107314 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.132428 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.133937 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.140164 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xp6dg" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.140334 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.140817 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.144494 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.148002 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.161422 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243253 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-kolla-config\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243357 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243401 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-config-data-default\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243442 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243469 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e620ef2b-6951-4c91-8517-c35e07ee8a2a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243520 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njjl\" (UniqueName: \"kubernetes.io/projected/e620ef2b-6951-4c91-8517-c35e07ee8a2a-kube-api-access-2njjl\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243557 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e620ef2b-6951-4c91-8517-c35e07ee8a2a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.243594 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e620ef2b-6951-4c91-8517-c35e07ee8a2a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.344728 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-kolla-config\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.344828 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.344895 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-config-data-default\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.344983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.345022 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e620ef2b-6951-4c91-8517-c35e07ee8a2a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.345079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njjl\" (UniqueName: \"kubernetes.io/projected/e620ef2b-6951-4c91-8517-c35e07ee8a2a-kube-api-access-2njjl\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.345121 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e620ef2b-6951-4c91-8517-c35e07ee8a2a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.345166 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e620ef2b-6951-4c91-8517-c35e07ee8a2a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.346221 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e620ef2b-6951-4c91-8517-c35e07ee8a2a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc 
kubenswrapper[4770]: I0126 18:59:43.347308 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-kolla-config\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.347527 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.348679 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-config-data-default\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.350776 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e620ef2b-6951-4c91-8517-c35e07ee8a2a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.352603 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e620ef2b-6951-4c91-8517-c35e07ee8a2a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.363725 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e620ef2b-6951-4c91-8517-c35e07ee8a2a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.370217 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njjl\" (UniqueName: \"kubernetes.io/projected/e620ef2b-6951-4c91-8517-c35e07ee8a2a-kube-api-access-2njjl\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.384049 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"e620ef2b-6951-4c91-8517-c35e07ee8a2a\") " pod="openstack/openstack-galera-0" Jan 26 18:59:43 crc kubenswrapper[4770]: I0126 18:59:43.478868 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.635763 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.637862 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.643785 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.647195 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.647401 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4fxz7" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.647814 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.648812 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776153 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776209 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776267 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776296 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxg8\" (UniqueName: \"kubernetes.io/projected/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-kube-api-access-ttxg8\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776423 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776634 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.776777 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878374 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878431 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878477 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878497 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878550 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878580 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxg8\" (UniqueName: \"kubernetes.io/projected/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-kube-api-access-ttxg8\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878601 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.878618 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.879003 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.879366 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.879491 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.879731 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.880329 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.890769 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.905377 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc 
kubenswrapper[4770]: I0126 18:59:44.942144 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:44 crc kubenswrapper[4770]: I0126 18:59:44.967182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxg8\" (UniqueName: \"kubernetes.io/projected/5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b-kube-api-access-ttxg8\") pod \"openstack-cell1-galera-0\" (UID: \"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b\") " pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.004676 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.006117 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.010497 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.010676 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kb5ft" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.010826 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.027364 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.082014 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacb7365-d724-4d52-96c8-edb12977e1f3-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.082087 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eacb7365-d724-4d52-96c8-edb12977e1f3-kolla-config\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.082140 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eacb7365-d724-4d52-96c8-edb12977e1f3-config-data\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.082163 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacb7365-d724-4d52-96c8-edb12977e1f3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.082187 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zmn\" (UniqueName: \"kubernetes.io/projected/eacb7365-d724-4d52-96c8-edb12977e1f3-kube-api-access-j6zmn\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.183617 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eacb7365-d724-4d52-96c8-edb12977e1f3-config-data\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.183662 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacb7365-d724-4d52-96c8-edb12977e1f3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.183678 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zmn\" (UniqueName: \"kubernetes.io/projected/eacb7365-d724-4d52-96c8-edb12977e1f3-kube-api-access-j6zmn\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.183782 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacb7365-d724-4d52-96c8-edb12977e1f3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.183824 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eacb7365-d724-4d52-96c8-edb12977e1f3-kolla-config\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.184532 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eacb7365-d724-4d52-96c8-edb12977e1f3-kolla-config\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.184628 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eacb7365-d724-4d52-96c8-edb12977e1f3-config-data\") pod \"memcached-0\" (UID: 
\"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.186869 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eacb7365-d724-4d52-96c8-edb12977e1f3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.186880 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eacb7365-d724-4d52-96c8-edb12977e1f3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.199216 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zmn\" (UniqueName: \"kubernetes.io/projected/eacb7365-d724-4d52-96c8-edb12977e1f3-kube-api-access-j6zmn\") pod \"memcached-0\" (UID: \"eacb7365-d724-4d52-96c8-edb12977e1f3\") " pod="openstack/memcached-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.265519 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 18:59:45 crc kubenswrapper[4770]: I0126 18:59:45.335482 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 18:59:46 crc kubenswrapper[4770]: I0126 18:59:46.832297 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:59:46 crc kubenswrapper[4770]: I0126 18:59:46.833861 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:59:46 crc kubenswrapper[4770]: I0126 18:59:46.836397 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2m764" Jan 26 18:59:46 crc kubenswrapper[4770]: I0126 18:59:46.854186 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:59:46 crc kubenswrapper[4770]: I0126 18:59:46.910863 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9hhm\" (UniqueName: \"kubernetes.io/projected/809b98d0-f155-4506-8dd3-e0cb6c3a6ff0-kube-api-access-b9hhm\") pod \"kube-state-metrics-0\" (UID: \"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0\") " pod="openstack/kube-state-metrics-0" Jan 26 18:59:47 crc kubenswrapper[4770]: I0126 18:59:47.012683 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9hhm\" (UniqueName: \"kubernetes.io/projected/809b98d0-f155-4506-8dd3-e0cb6c3a6ff0-kube-api-access-b9hhm\") pod \"kube-state-metrics-0\" (UID: \"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0\") " pod="openstack/kube-state-metrics-0" Jan 26 18:59:47 crc kubenswrapper[4770]: I0126 18:59:47.039744 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9hhm\" (UniqueName: \"kubernetes.io/projected/809b98d0-f155-4506-8dd3-e0cb6c3a6ff0-kube-api-access-b9hhm\") pod \"kube-state-metrics-0\" (UID: \"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0\") " pod="openstack/kube-state-metrics-0" Jan 26 18:59:47 crc kubenswrapper[4770]: I0126 18:59:47.163502 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.129940 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.132715 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.138028 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.138388 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.138643 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.138810 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.138934 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.139063 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.139119 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.139670 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gqjgz" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.144241 4770 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236409 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d267c82-de7b-48b9-98f5-66d78067778d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236532 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236574 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236613 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236655 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236689 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236837 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236891 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hxpr\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-kube-api-access-7hxpr\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.236945 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.328916 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-584684c95c-bpdl5"] Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338428 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338476 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338497 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338529 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hxpr\" (UniqueName: 
\"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-kube-api-access-7hxpr\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338596 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d267c82-de7b-48b9-98f5-66d78067778d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338648 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338671 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.338711 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.340832 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.341433 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.341952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.343981 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.343980 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.344830 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d267c82-de7b-48b9-98f5-66d78067778d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.348718 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-config\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.353600 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.353637 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bce0a61bb2b9f961be74694fe5f6cf0aff9e298c0837c7d91488158ec6fad94/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.356580 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.357662 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hxpr\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-kube-api-access-7hxpr\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.379593 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:48 crc kubenswrapper[4770]: I0126 18:59:48.471603 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.634714 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hgfvf"] Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.636042 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.638263 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-g79f6" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.638790 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.638963 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.655294 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hgfvf"] Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.691656 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dtdfk"] Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.693413 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.701578 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dtdfk"] Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777424 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2095b9-c866-4424-aa95-31718bd65d61-scripts\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777492 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2095b9-c866-4424-aa95-31718bd65d61-combined-ca-bundle\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777523 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqhd\" (UniqueName: \"kubernetes.io/projected/9d2095b9-c866-4424-aa95-31718bd65d61-kube-api-access-9fqhd\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777545 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkcnk\" (UniqueName: \"kubernetes.io/projected/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-kube-api-access-vkcnk\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-run-ovn\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777634 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-scripts\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-run\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777716 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d2095b9-c866-4424-aa95-31718bd65d61-ovn-controller-tls-certs\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.777749 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-lib\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.778057 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-log-ovn\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.778100 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-log\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.778135 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-etc-ovs\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.778151 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-run\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.879857 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-scripts\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.879980 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-run\") pod \"ovn-controller-hgfvf\" (UID: 
\"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880017 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d2095b9-c866-4424-aa95-31718bd65d61-ovn-controller-tls-certs\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880050 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-lib\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880146 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-log-ovn\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880179 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-log\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880214 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-etc-ovs\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 
18:59:50.880236 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-run\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880303 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2095b9-c866-4424-aa95-31718bd65d61-scripts\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880353 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2095b9-c866-4424-aa95-31718bd65d61-combined-ca-bundle\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fqhd\" (UniqueName: \"kubernetes.io/projected/9d2095b9-c866-4424-aa95-31718bd65d61-kube-api-access-9fqhd\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880413 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkcnk\" (UniqueName: \"kubernetes.io/projected/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-kube-api-access-vkcnk\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880467 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-run-ovn\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880473 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-run\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880609 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-lib\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880716 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-run\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880768 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-run-ovn\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880859 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-var-log\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 
26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.880896 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-etc-ovs\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.883136 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-scripts\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.888178 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d2095b9-c866-4424-aa95-31718bd65d61-scripts\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.893563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2095b9-c866-4424-aa95-31718bd65d61-combined-ca-bundle\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.898945 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d2095b9-c866-4424-aa95-31718bd65d61-var-log-ovn\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.899278 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9d2095b9-c866-4424-aa95-31718bd65d61-ovn-controller-tls-certs\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.903215 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fqhd\" (UniqueName: \"kubernetes.io/projected/9d2095b9-c866-4424-aa95-31718bd65d61-kube-api-access-9fqhd\") pod \"ovn-controller-hgfvf\" (UID: \"9d2095b9-c866-4424-aa95-31718bd65d61\") " pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.906032 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkcnk\" (UniqueName: \"kubernetes.io/projected/48d5e8ce-0771-4ca8-9879-6ba39cd217a4-kube-api-access-vkcnk\") pod \"ovn-controller-ovs-dtdfk\" (UID: \"48d5e8ce-0771-4ca8-9879-6ba39cd217a4\") " pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:50 crc kubenswrapper[4770]: I0126 18:59:50.955425 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hgfvf" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.008619 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.031070 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.032405 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.040274 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.040496 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.040818 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.040997 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-j9pkz" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.041065 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.052480 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083082 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b42faa6-0359-44d0-96ea-7264ab250ba4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083158 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083193 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083221 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b42faa6-0359-44d0-96ea-7264ab250ba4-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083323 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b42faa6-0359-44d0-96ea-7264ab250ba4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083374 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkg9l\" (UniqueName: \"kubernetes.io/projected/3b42faa6-0359-44d0-96ea-7264ab250ba4-kube-api-access-jkg9l\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083538 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.083608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.185968 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b42faa6-0359-44d0-96ea-7264ab250ba4-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186095 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b42faa6-0359-44d0-96ea-7264ab250ba4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186153 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkg9l\" (UniqueName: \"kubernetes.io/projected/3b42faa6-0359-44d0-96ea-7264ab250ba4-kube-api-access-jkg9l\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186269 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186395 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc 
kubenswrapper[4770]: I0126 18:59:51.186435 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b42faa6-0359-44d0-96ea-7264ab250ba4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186487 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186730 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b42faa6-0359-44d0-96ea-7264ab250ba4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.186800 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b42faa6-0359-44d0-96ea-7264ab250ba4-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.187074 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.187664 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b42faa6-0359-44d0-96ea-7264ab250ba4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.191016 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.191949 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.193546 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b42faa6-0359-44d0-96ea-7264ab250ba4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.208561 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkg9l\" (UniqueName: \"kubernetes.io/projected/3b42faa6-0359-44d0-96ea-7264ab250ba4-kube-api-access-jkg9l\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " 
pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.214266 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b42faa6-0359-44d0-96ea-7264ab250ba4\") " pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:51 crc kubenswrapper[4770]: I0126 18:59:51.408974 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 18:59:52 crc kubenswrapper[4770]: W0126 18:59:52.911052 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9887ef0_63fd_44fa_863c_4af8873efcaf.slice/crio-7793dfea335340d67aff836885fba11c1af74d0647af73406bc5982788370b5d WatchSource:0}: Error finding container 7793dfea335340d67aff836885fba11c1af74d0647af73406bc5982788370b5d: Status 404 returned error can't find the container with id 7793dfea335340d67aff836885fba11c1af74d0647af73406bc5982788370b5d Jan 26 18:59:53 crc kubenswrapper[4770]: I0126 18:59:53.379856 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 18:59:53 crc kubenswrapper[4770]: I0126 18:59:53.438403 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 26 18:59:53 crc kubenswrapper[4770]: I0126 18:59:53.575660 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584684c95c-bpdl5" event={"ID":"c9887ef0-63fd-44fa-863c-4af8873efcaf","Type":"ContainerStarted","Data":"7793dfea335340d67aff836885fba11c1af74d0647af73406bc5982788370b5d"} Jan 26 18:59:53 crc kubenswrapper[4770]: W0126 18:59:53.834292 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e3d608a_c9d7_4a29_b45a_0c175851fdbc.slice/crio-ffcb8324f3ecc6d60776c71caf7b1a56d5c02973e0245db8c01bfe7fd864d009 WatchSource:0}: Error finding container ffcb8324f3ecc6d60776c71caf7b1a56d5c02973e0245db8c01bfe7fd864d009: Status 404 returned error can't find the container with id ffcb8324f3ecc6d60776c71caf7b1a56d5c02973e0245db8c01bfe7fd864d009 Jan 26 18:59:53 crc kubenswrapper[4770]: E0126 18:59:53.841667 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 18:59:53 crc kubenswrapper[4770]: E0126 18:59:53.841719 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 26 18:59:53 crc kubenswrapper[4770]: E0126 18:59:53.841831 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.223:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxm76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7768b46857-pxgm8_openstack(318c8209-0a19-4f09-b6c7-0f68f3ce971e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 18:59:53 crc kubenswrapper[4770]: E0126 18:59:53.842945 4770 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7768b46857-pxgm8" podUID="318c8209-0a19-4f09-b6c7-0f68f3ce971e" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.450831 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.458372 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fd79c85c-tzzhs"] Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.558045 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.562318 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.567533 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.567823 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.568168 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.568216 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tg4jw" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.586082 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.598250 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e620ef2b-6951-4c91-8517-c35e07ee8a2a","Type":"ContainerStarted","Data":"52a79fb3fc06d6d893084a9d76609a6346d155ada9ecb8eba75f137ba48154e9"} Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.599984 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7e3d608a-c9d7-4a29-b45a-0c175851fdbc","Type":"ContainerStarted","Data":"ffcb8324f3ecc6d60776c71caf7b1a56d5c02973e0245db8c01bfe7fd864d009"} Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.601919 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" event={"ID":"05a727c9-a964-4a43-b1ae-8fc566f92253","Type":"ContainerStarted","Data":"71af2553bd267875e207f345b79cac880f8f7ec7b372afc239e75dfd9272aa12"} Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.603755 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"876c1ba4-ebd2-47b9-80d0-5158053c4fb8","Type":"ContainerStarted","Data":"3ef9a1a2c9a1a10cf1c9bff02dd4997460ae80a0f4b16ef987567a9de8166e20"} Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.606033 4770 generic.go:334] "Generic (PLEG): container finished" podID="c9887ef0-63fd-44fa-863c-4af8873efcaf" containerID="a393acc64955cc59966eb39d9a1bdaf44348ce08d8ed6df301da1e2f087fbfb1" exitCode=0 Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.606271 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584684c95c-bpdl5" event={"ID":"c9887ef0-63fd-44fa-863c-4af8873efcaf","Type":"ContainerDied","Data":"a393acc64955cc59966eb39d9a1bdaf44348ce08d8ed6df301da1e2f087fbfb1"} Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.758195 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " 
pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.758246 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn852\" (UniqueName: \"kubernetes.io/projected/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-kube-api-access-xn852\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.758306 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-config\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.758342 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.758359 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.758390 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 
18:59:54.758440 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.758463 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.859967 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-config\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.860038 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.860058 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.860090 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.860129 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.860149 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.860228 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.860256 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn852\" (UniqueName: \"kubernetes.io/projected/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-kube-api-access-xn852\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.861292 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " 
pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.861469 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-config\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.862210 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.862296 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.870582 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.873076 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.874501 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.911727 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn852\" (UniqueName: \"kubernetes.io/projected/23527c1a-fd08-4cc7-a6b7-48fe3988ac6e-kube-api-access-xn852\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:54 crc kubenswrapper[4770]: I0126 18:59:54.934937 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e\") " pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.173158 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.178085 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfzzz\" (UniqueName: \"kubernetes.io/projected/c9887ef0-63fd-44fa-863c-4af8873efcaf-kube-api-access-cfzzz\") pod \"c9887ef0-63fd-44fa-863c-4af8873efcaf\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.181396 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-dns-svc\") pod \"c9887ef0-63fd-44fa-863c-4af8873efcaf\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.181445 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-config\") pod \"c9887ef0-63fd-44fa-863c-4af8873efcaf\" (UID: \"c9887ef0-63fd-44fa-863c-4af8873efcaf\") " Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.182771 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9887ef0-63fd-44fa-863c-4af8873efcaf-kube-api-access-cfzzz" (OuterVolumeSpecName: "kube-api-access-cfzzz") pod "c9887ef0-63fd-44fa-863c-4af8873efcaf" (UID: "c9887ef0-63fd-44fa-863c-4af8873efcaf"). InnerVolumeSpecName "kube-api-access-cfzzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.183153 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfzzz\" (UniqueName: \"kubernetes.io/projected/c9887ef0-63fd-44fa-863c-4af8873efcaf-kube-api-access-cfzzz\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.201465 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.203830 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.209335 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-config" (OuterVolumeSpecName: "config") pod "c9887ef0-63fd-44fa-863c-4af8873efcaf" (UID: "c9887ef0-63fd-44fa-863c-4af8873efcaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.216773 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9887ef0-63fd-44fa-863c-4af8873efcaf" (UID: "c9887ef0-63fd-44fa-863c-4af8873efcaf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.218161 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 18:59:55 crc kubenswrapper[4770]: W0126 18:59:55.238405 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeacb7365_d724_4d52_96c8_edb12977e1f3.slice/crio-0008ffbe885e41cdeeaf1c947b1367cea76413bf493c33892576ae8b588ce925 WatchSource:0}: Error finding container 0008ffbe885e41cdeeaf1c947b1367cea76413bf493c33892576ae8b588ce925: Status 404 returned error can't find the container with id 0008ffbe885e41cdeeaf1c947b1367cea76413bf493c33892576ae8b588ce925 Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.250243 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 18:59:55 crc kubenswrapper[4770]: W0126 18:59:55.261381 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod176a0205_a131_4510_bcf5_420945c4c6ee.slice/crio-3e53adaec4c0db3bc794e5187e52e4216b957b357539a42fcd690cf59579c327 WatchSource:0}: Error finding container 3e53adaec4c0db3bc794e5187e52e4216b957b357539a42fcd690cf59579c327: Status 404 returned error can't find the container with id 3e53adaec4c0db3bc794e5187e52e4216b957b357539a42fcd690cf59579c327 Jan 26 18:59:55 crc kubenswrapper[4770]: W0126 18:59:55.278550 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod809b98d0_f155_4506_8dd3_e0cb6c3a6ff0.slice/crio-49c5d55ddda65faab15ff450bca3172692d101f07aea0640d874e4dfca12fc9c WatchSource:0}: Error finding container 49c5d55ddda65faab15ff450bca3172692d101f07aea0640d874e4dfca12fc9c: Status 404 returned error can't find the container with id 49c5d55ddda65faab15ff450bca3172692d101f07aea0640d874e4dfca12fc9c Jan 26 18:59:55 
crc kubenswrapper[4770]: I0126 18:59:55.284493 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxm76\" (UniqueName: \"kubernetes.io/projected/318c8209-0a19-4f09-b6c7-0f68f3ce971e-kube-api-access-gxm76\") pod \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.284734 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-dns-svc\") pod \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.284996 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-config\") pod \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\" (UID: \"318c8209-0a19-4f09-b6c7-0f68f3ce971e\") " Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.285196 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "318c8209-0a19-4f09-b6c7-0f68f3ce971e" (UID: "318c8209-0a19-4f09-b6c7-0f68f3ce971e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.286234 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.286439 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.286577 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9887ef0-63fd-44fa-863c-4af8873efcaf-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.287070 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-config" (OuterVolumeSpecName: "config") pod "318c8209-0a19-4f09-b6c7-0f68f3ce971e" (UID: "318c8209-0a19-4f09-b6c7-0f68f3ce971e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.287549 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.294712 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318c8209-0a19-4f09-b6c7-0f68f3ce971e-kube-api-access-gxm76" (OuterVolumeSpecName: "kube-api-access-gxm76") pod "318c8209-0a19-4f09-b6c7-0f68f3ce971e" (UID: "318c8209-0a19-4f09-b6c7-0f68f3ce971e"). InnerVolumeSpecName "kube-api-access-gxm76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.312318 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.354317 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f7fc7997-g2g48"] Jan 26 18:59:55 crc kubenswrapper[4770]: W0126 18:59:55.357458 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b42faa6_0359_44d0_96ea_7264ab250ba4.slice/crio-253b300ecc85827e56db66b46a48e25062dcad6e984add2dc4fc3f6232f26afd WatchSource:0}: Error finding container 253b300ecc85827e56db66b46a48e25062dcad6e984add2dc4fc3f6232f26afd: Status 404 returned error can't find the container with id 253b300ecc85827e56db66b46a48e25062dcad6e984add2dc4fc3f6232f26afd Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.375100 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.389233 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8209-0a19-4f09-b6c7-0f68f3ce971e-config\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.389269 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxm76\" (UniqueName: \"kubernetes.io/projected/318c8209-0a19-4f09-b6c7-0f68f3ce971e-kube-api-access-gxm76\") on node \"crc\" DevicePath \"\"" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.434234 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dtdfk"] Jan 26 18:59:55 crc kubenswrapper[4770]: W0126 18:59:55.447102 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48d5e8ce_0771_4ca8_9879_6ba39cd217a4.slice/crio-b3bca3bcd4677f326078cb314cfc07d24ef05bc011648b586fc0e57162db56fa WatchSource:0}: Error finding container b3bca3bcd4677f326078cb314cfc07d24ef05bc011648b586fc0e57162db56fa: Status 404 returned error can't find the container with id b3bca3bcd4677f326078cb314cfc07d24ef05bc011648b586fc0e57162db56fa Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.470540 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hgfvf"] Jan 26 18:59:55 crc kubenswrapper[4770]: W0126 18:59:55.483560 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d2095b9_c866_4424_aa95_31718bd65d61.slice/crio-fad104ec8d09f3e802e13e18ba7eeb8246337ad644d5c624c69b0414687c3267 WatchSource:0}: Error finding container fad104ec8d09f3e802e13e18ba7eeb8246337ad644d5c624c69b0414687c3267: Status 404 returned error can't find the container with id fad104ec8d09f3e802e13e18ba7eeb8246337ad644d5c624c69b0414687c3267 Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.487079 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 18:59:55 crc kubenswrapper[4770]: W0126 18:59:55.497282 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d267c82_de7b_48b9_98f5_66d78067778d.slice/crio-3f9babc1954ccb6dbad9de9b64c33670d6d7aadf621204e444397014aac18fdb WatchSource:0}: Error finding container 3f9babc1954ccb6dbad9de9b64c33670d6d7aadf621204e444397014aac18fdb: Status 404 returned error can't find the container with id 3f9babc1954ccb6dbad9de9b64c33670d6d7aadf621204e444397014aac18fdb Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.623473 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hgfvf" 
event={"ID":"9d2095b9-c866-4424-aa95-31718bd65d61","Type":"ContainerStarted","Data":"fad104ec8d09f3e802e13e18ba7eeb8246337ad644d5c624c69b0414687c3267"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.629092 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7768b46857-pxgm8" event={"ID":"318c8209-0a19-4f09-b6c7-0f68f3ce971e","Type":"ContainerDied","Data":"a53bf6bcabd4f7623237ff01fb04bb3e3bf7a2efd11ffbaf3b9fd4bf5000909a"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.629110 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7768b46857-pxgm8" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.632090 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0","Type":"ContainerStarted","Data":"49c5d55ddda65faab15ff450bca3172692d101f07aea0640d874e4dfca12fc9c"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.633902 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"176a0205-a131-4510-bcf5-420945c4c6ee","Type":"ContainerStarted","Data":"3e53adaec4c0db3bc794e5187e52e4216b957b357539a42fcd690cf59579c327"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.636373 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-584684c95c-bpdl5" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.636375 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-584684c95c-bpdl5" event={"ID":"c9887ef0-63fd-44fa-863c-4af8873efcaf","Type":"ContainerDied","Data":"7793dfea335340d67aff836885fba11c1af74d0647af73406bc5982788370b5d"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.636453 4770 scope.go:117] "RemoveContainer" containerID="a393acc64955cc59966eb39d9a1bdaf44348ce08d8ed6df301da1e2f087fbfb1" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.644501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dtdfk" event={"ID":"48d5e8ce-0771-4ca8-9879-6ba39cd217a4","Type":"ContainerStarted","Data":"b3bca3bcd4677f326078cb314cfc07d24ef05bc011648b586fc0e57162db56fa"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.647997 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerStarted","Data":"3f9babc1954ccb6dbad9de9b64c33670d6d7aadf621204e444397014aac18fdb"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.652350 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eacb7365-d724-4d52-96c8-edb12977e1f3","Type":"ContainerStarted","Data":"0008ffbe885e41cdeeaf1c947b1367cea76413bf493c33892576ae8b588ce925"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.668246 4770 generic.go:334] "Generic (PLEG): container finished" podID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerID="6e2c0eaaa3386f0f145ec900edada4c5d6f2b0a0a8a3f890f28d05f6bf1adbea" exitCode=0 Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.668368 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" 
event={"ID":"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03","Type":"ContainerDied","Data":"6e2c0eaaa3386f0f145ec900edada4c5d6f2b0a0a8a3f890f28d05f6bf1adbea"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.668402 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" event={"ID":"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03","Type":"ContainerStarted","Data":"42b973d4b257b9d2196f6fcaebcb8d0c3685dbd21713624f7bd70219fb9c7533"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.701682 4770 generic.go:334] "Generic (PLEG): container finished" podID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerID="2d7b76a9522f1387c87031087326f4d69827cd7754b840c8baddd9632b8c1d8c" exitCode=0 Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.701765 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" event={"ID":"05a727c9-a964-4a43-b1ae-8fc566f92253","Type":"ContainerDied","Data":"2d7b76a9522f1387c87031087326f4d69827cd7754b840c8baddd9632b8c1d8c"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.705423 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b42faa6-0359-44d0-96ea-7264ab250ba4","Type":"ContainerStarted","Data":"253b300ecc85827e56db66b46a48e25062dcad6e984add2dc4fc3f6232f26afd"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.707227 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7768b46857-pxgm8"] Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.707859 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b","Type":"ContainerStarted","Data":"38ec2aedfff32143aad20965270a01904d1becee3b16dbe15a5a22c25f82d4e4"} Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.715255 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7768b46857-pxgm8"] Jan 26 18:59:55 crc 
kubenswrapper[4770]: I0126 18:59:55.848879 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318c8209-0a19-4f09-b6c7-0f68f3ce971e" path="/var/lib/kubelet/pods/318c8209-0a19-4f09-b6c7-0f68f3ce971e/volumes" Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.849363 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-584684c95c-bpdl5"] Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.849406 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-584684c95c-bpdl5"] Jan 26 18:59:55 crc kubenswrapper[4770]: I0126 18:59:55.849433 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 18:59:56 crc kubenswrapper[4770]: I0126 18:59:56.717208 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" event={"ID":"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03","Type":"ContainerStarted","Data":"346e66a16768ab0d284a8249fff60ec5d642dcc17c074678b924cccff334b0cf"} Jan 26 18:59:56 crc kubenswrapper[4770]: I0126 18:59:56.717571 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 18:59:56 crc kubenswrapper[4770]: I0126 18:59:56.720860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" event={"ID":"05a727c9-a964-4a43-b1ae-8fc566f92253","Type":"ContainerStarted","Data":"bfe2d6694c0c7148d7aa41ec16468042a68d460faab8883d93aeec0e504c593b"} Jan 26 18:59:56 crc kubenswrapper[4770]: I0126 18:59:56.721000 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 18:59:56 crc kubenswrapper[4770]: I0126 18:59:56.723517 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e","Type":"ContainerStarted","Data":"e77e23879b61597847073c1c129225b89f7e5cfeefb343df00504b430d50b52d"} 
Jan 26 18:59:56 crc kubenswrapper[4770]: I0126 18:59:56.744182 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" podStartSLOduration=16.744163821 podStartE2EDuration="16.744163821s" podCreationTimestamp="2026-01-26 18:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:59:56.738138888 +0000 UTC m=+1081.303045620" watchObservedRunningTime="2026-01-26 18:59:56.744163821 +0000 UTC m=+1081.309070553" Jan 26 18:59:56 crc kubenswrapper[4770]: I0126 18:59:56.760175 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" podStartSLOduration=16.760158646 podStartE2EDuration="16.760158646s" podCreationTimestamp="2026-01-26 18:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 18:59:56.753732881 +0000 UTC m=+1081.318639613" watchObservedRunningTime="2026-01-26 18:59:56.760158646 +0000 UTC m=+1081.325065378" Jan 26 18:59:57 crc kubenswrapper[4770]: I0126 18:59:57.783488 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9887ef0-63fd-44fa-863c-4af8873efcaf" path="/var/lib/kubelet/pods/c9887ef0-63fd-44fa-863c-4af8873efcaf/volumes" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.155922 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt"] Jan 26 19:00:00 crc kubenswrapper[4770]: E0126 19:00:00.156552 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9887ef0-63fd-44fa-863c-4af8873efcaf" containerName="init" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.156564 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9887ef0-63fd-44fa-863c-4af8873efcaf" containerName="init" Jan 26 19:00:00 crc 
kubenswrapper[4770]: I0126 19:00:00.156734 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9887ef0-63fd-44fa-863c-4af8873efcaf" containerName="init" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.157372 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.159333 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.161277 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.181380 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6377faf9-1047-4fe9-a2b8-816f0213cde0-secret-volume\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.181556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6377faf9-1047-4fe9-a2b8-816f0213cde0-config-volume\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.181582 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7bz\" (UniqueName: \"kubernetes.io/projected/6377faf9-1047-4fe9-a2b8-816f0213cde0-kube-api-access-xw7bz\") pod \"collect-profiles-29490900-cc5rt\" (UID: 
\"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.181712 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt"] Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.283450 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6377faf9-1047-4fe9-a2b8-816f0213cde0-secret-volume\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.283598 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6377faf9-1047-4fe9-a2b8-816f0213cde0-config-volume\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.283627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7bz\" (UniqueName: \"kubernetes.io/projected/6377faf9-1047-4fe9-a2b8-816f0213cde0-kube-api-access-xw7bz\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.284915 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6377faf9-1047-4fe9-a2b8-816f0213cde0-config-volume\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" 
Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.290617 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6377faf9-1047-4fe9-a2b8-816f0213cde0-secret-volume\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.302221 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw7bz\" (UniqueName: \"kubernetes.io/projected/6377faf9-1047-4fe9-a2b8-816f0213cde0-kube-api-access-xw7bz\") pod \"collect-profiles-29490900-cc5rt\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:00 crc kubenswrapper[4770]: I0126 19:00:00.497114 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.938364 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-t59c4"] Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.939526 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.942447 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.957287 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t59c4"] Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.957581 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ff64ab-79f6-4941-8de7-b9edbea8439d-config\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.957667 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ff64ab-79f6-4941-8de7-b9edbea8439d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.957821 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9ff64ab-79f6-4941-8de7-b9edbea8439d-ovn-rundir\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.957884 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9ff64ab-79f6-4941-8de7-b9edbea8439d-ovs-rundir\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " 
pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.958073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ff64ab-79f6-4941-8de7-b9edbea8439d-combined-ca-bundle\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:03 crc kubenswrapper[4770]: I0126 19:00:03.958182 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkpf\" (UniqueName: \"kubernetes.io/projected/d9ff64ab-79f6-4941-8de7-b9edbea8439d-kube-api-access-2lkpf\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.059619 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9ff64ab-79f6-4941-8de7-b9edbea8439d-ovn-rundir\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.059670 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9ff64ab-79f6-4941-8de7-b9edbea8439d-ovs-rundir\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.060034 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d9ff64ab-79f6-4941-8de7-b9edbea8439d-ovs-rundir\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " 
pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.060085 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d9ff64ab-79f6-4941-8de7-b9edbea8439d-ovn-rundir\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.060103 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ff64ab-79f6-4941-8de7-b9edbea8439d-combined-ca-bundle\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.060190 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lkpf\" (UniqueName: \"kubernetes.io/projected/d9ff64ab-79f6-4941-8de7-b9edbea8439d-kube-api-access-2lkpf\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.060241 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ff64ab-79f6-4941-8de7-b9edbea8439d-config\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.060270 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ff64ab-79f6-4941-8de7-b9edbea8439d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 
19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.061022 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ff64ab-79f6-4941-8de7-b9edbea8439d-config\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.067387 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ff64ab-79f6-4941-8de7-b9edbea8439d-combined-ca-bundle\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.068217 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ff64ab-79f6-4941-8de7-b9edbea8439d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.078760 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lkpf\" (UniqueName: \"kubernetes.io/projected/d9ff64ab-79f6-4941-8de7-b9edbea8439d-kube-api-access-2lkpf\") pod \"ovn-controller-metrics-t59c4\" (UID: \"d9ff64ab-79f6-4941-8de7-b9edbea8439d\") " pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.089844 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fd79c85c-tzzhs"] Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.090083 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="dnsmasq-dns" 
containerID="cri-o://bfe2d6694c0c7148d7aa41ec16468042a68d460faab8883d93aeec0e504c593b" gracePeriod=10 Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.098314 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.125094 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56965dc457-dqf6b"] Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.128285 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.137614 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.185783 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-dns-svc\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.185828 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-config\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.185912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-ovsdbserver-nb\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 
19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.186064 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xgr\" (UniqueName: \"kubernetes.io/projected/e12683bf-103c-40f3-997c-4b44d151def9-kube-api-access-t4xgr\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.199826 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56965dc457-dqf6b"] Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.263951 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t59c4" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.294924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4xgr\" (UniqueName: \"kubernetes.io/projected/e12683bf-103c-40f3-997c-4b44d151def9-kube-api-access-t4xgr\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.295051 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-dns-svc\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.295085 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-config\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.295169 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-ovsdbserver-nb\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.296310 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-ovsdbserver-nb\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.296454 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-dns-svc\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.297125 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-config\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.298910 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f7fc7997-g2g48"] Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.299194 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="dnsmasq-dns" containerID="cri-o://346e66a16768ab0d284a8249fff60ec5d642dcc17c074678b924cccff334b0cf" gracePeriod=10 Jan 26 19:00:04 crc kubenswrapper[4770]: 
I0126 19:00:04.302164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.323767 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4xgr\" (UniqueName: \"kubernetes.io/projected/e12683bf-103c-40f3-997c-4b44d151def9-kube-api-access-t4xgr\") pod \"dnsmasq-dns-56965dc457-dqf6b\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.336559 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f7dc4f659-x5dd2"] Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.338376 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.343148 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.375191 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7dc4f659-x5dd2"] Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.396996 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.397047 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlx7t\" (UniqueName: \"kubernetes.io/projected/de4cef66-1301-4ef9-bac5-f416e92ef9e5-kube-api-access-hlx7t\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " 
pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.397087 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-config\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.397121 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-dns-svc\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.397149 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.497351 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.498401 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.498450 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlx7t\" (UniqueName: \"kubernetes.io/projected/de4cef66-1301-4ef9-bac5-f416e92ef9e5-kube-api-access-hlx7t\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.498494 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-config\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.498536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-dns-svc\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.498570 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" 
Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.499664 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-sb\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.500050 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.500488 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-config\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.500765 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-dns-svc\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.520144 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlx7t\" (UniqueName: \"kubernetes.io/projected/de4cef66-1301-4ef9-bac5-f416e92ef9e5-kube-api-access-hlx7t\") pod \"dnsmasq-dns-5f7dc4f659-x5dd2\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:04 crc kubenswrapper[4770]: I0126 19:00:04.678208 4770 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:05 crc kubenswrapper[4770]: I0126 19:00:05.673552 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: connect: connection refused" Jan 26 19:00:05 crc kubenswrapper[4770]: I0126 19:00:05.969412 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: connect: connection refused" Jan 26 19:00:10 crc kubenswrapper[4770]: I0126 19:00:10.706248 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: connect: connection refused" Jan 26 19:00:10 crc kubenswrapper[4770]: I0126 19:00:10.850309 4770 generic.go:334] "Generic (PLEG): container finished" podID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerID="346e66a16768ab0d284a8249fff60ec5d642dcc17c074678b924cccff334b0cf" exitCode=0 Jan 26 19:00:10 crc kubenswrapper[4770]: I0126 19:00:10.850359 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" event={"ID":"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03","Type":"ContainerDied","Data":"346e66a16768ab0d284a8249fff60ec5d642dcc17c074678b924cccff334b0cf"} Jan 26 19:00:10 crc kubenswrapper[4770]: I0126 19:00:10.853676 4770 generic.go:334] "Generic (PLEG): container finished" podID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerID="bfe2d6694c0c7148d7aa41ec16468042a68d460faab8883d93aeec0e504c593b" exitCode=0 Jan 26 19:00:10 crc kubenswrapper[4770]: I0126 19:00:10.853742 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" event={"ID":"05a727c9-a964-4a43-b1ae-8fc566f92253","Type":"ContainerDied","Data":"bfe2d6694c0c7148d7aa41ec16468042a68d460faab8883d93aeec0e504c593b"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.098938 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.114074 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.131093 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khhkj\" (UniqueName: \"kubernetes.io/projected/05a727c9-a964-4a43-b1ae-8fc566f92253-kube-api-access-khhkj\") pod \"05a727c9-a964-4a43-b1ae-8fc566f92253\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.131151 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-config\") pod \"05a727c9-a964-4a43-b1ae-8fc566f92253\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.131228 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-dns-svc\") pod \"05a727c9-a964-4a43-b1ae-8fc566f92253\" (UID: \"05a727c9-a964-4a43-b1ae-8fc566f92253\") " Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.131245 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-dns-svc\") pod \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " Jan 26 19:00:11 crc 
kubenswrapper[4770]: I0126 19:00:11.131318 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-config\") pod \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.131362 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jm8t\" (UniqueName: \"kubernetes.io/projected/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-kube-api-access-7jm8t\") pod \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\" (UID: \"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03\") " Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.146989 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-kube-api-access-7jm8t" (OuterVolumeSpecName: "kube-api-access-7jm8t") pod "ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" (UID: "ca4fc6b9-67f6-4fb5-8caf-043e122a1d03"). InnerVolumeSpecName "kube-api-access-7jm8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.149683 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a727c9-a964-4a43-b1ae-8fc566f92253-kube-api-access-khhkj" (OuterVolumeSpecName: "kube-api-access-khhkj") pod "05a727c9-a964-4a43-b1ae-8fc566f92253" (UID: "05a727c9-a964-4a43-b1ae-8fc566f92253"). InnerVolumeSpecName "kube-api-access-khhkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.180625 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-config" (OuterVolumeSpecName: "config") pod "ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" (UID: "ca4fc6b9-67f6-4fb5-8caf-043e122a1d03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.185002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-config" (OuterVolumeSpecName: "config") pod "05a727c9-a964-4a43-b1ae-8fc566f92253" (UID: "05a727c9-a964-4a43-b1ae-8fc566f92253"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.186485 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" (UID: "ca4fc6b9-67f6-4fb5-8caf-043e122a1d03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.189563 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05a727c9-a964-4a43-b1ae-8fc566f92253" (UID: "05a727c9-a964-4a43-b1ae-8fc566f92253"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.233809 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.234082 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jm8t\" (UniqueName: \"kubernetes.io/projected/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-kube-api-access-7jm8t\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.234095 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khhkj\" (UniqueName: \"kubernetes.io/projected/05a727c9-a964-4a43-b1ae-8fc566f92253-kube-api-access-khhkj\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.234104 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.234112 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05a727c9-a964-4a43-b1ae-8fc566f92253-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.234122 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.368824 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt"] Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.392728 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56965dc457-dqf6b"] Jan 26 
19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.520029 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t59c4"] Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.529787 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f7dc4f659-x5dd2"] Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.864146 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" event={"ID":"6377faf9-1047-4fe9-a2b8-816f0213cde0","Type":"ContainerStarted","Data":"d8f300380dee15e2d45a2b1a070da94af7ad845d6ce0efb60ee7b03d39cbae27"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.867154 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dtdfk" event={"ID":"48d5e8ce-0771-4ca8-9879-6ba39cd217a4","Type":"ContainerStarted","Data":"53289c4d0ca80ba1646469893f7026a0ea34c77250c6f7972de69df5c35ffe8d"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.870558 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eacb7365-d724-4d52-96c8-edb12977e1f3","Type":"ContainerStarted","Data":"e6e5a0356e65449e08b3b5ddff9d2c6bea2aa1d974933a0614a5b204c1a7c3c6"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.871316 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.877759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" event={"ID":"ca4fc6b9-67f6-4fb5-8caf-043e122a1d03","Type":"ContainerDied","Data":"42b973d4b257b9d2196f6fcaebcb8d0c3685dbd21713624f7bd70219fb9c7533"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.877804 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.877824 4770 scope.go:117] "RemoveContainer" containerID="346e66a16768ab0d284a8249fff60ec5d642dcc17c074678b924cccff334b0cf" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.881346 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" event={"ID":"e12683bf-103c-40f3-997c-4b44d151def9","Type":"ContainerStarted","Data":"cadad39ab60402991461dc58a9b177437e3fa5454b60922886535d791f4f68a0"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.883265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e620ef2b-6951-4c91-8517-c35e07ee8a2a","Type":"ContainerStarted","Data":"13311631a18f39c60984dc3d13a0e107b4bb65b171eade987a0ffe382c464e77"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.888259 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" event={"ID":"05a727c9-a964-4a43-b1ae-8fc566f92253","Type":"ContainerDied","Data":"71af2553bd267875e207f345b79cac880f8f7ec7b372afc239e75dfd9272aa12"} Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.888426 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fd79c85c-tzzhs" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.940596 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.783996342 podStartE2EDuration="27.940434737s" podCreationTimestamp="2026-01-26 18:59:44 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.260436039 +0000 UTC m=+1079.825342771" lastFinishedPulling="2026-01-26 19:00:10.416874434 +0000 UTC m=+1094.981781166" observedRunningTime="2026-01-26 19:00:11.930264541 +0000 UTC m=+1096.495171323" watchObservedRunningTime="2026-01-26 19:00:11.940434737 +0000 UTC m=+1096.505341659" Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.957228 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f7fc7997-g2g48"] Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.965133 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57f7fc7997-g2g48"] Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.972523 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fd79c85c-tzzhs"] Jan 26 19:00:11 crc kubenswrapper[4770]: I0126 19:00:11.982435 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fd79c85c-tzzhs"] Jan 26 19:00:12 crc kubenswrapper[4770]: E0126 19:00:12.012906 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 19:00:12 crc kubenswrapper[4770]: E0126 19:00:12.012957 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 19:00:12 crc kubenswrapper[4770]: E0126 
19:00:12.013155 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9hhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(809b98d0-f155-4506-8dd3-e0cb6c3a6ff0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 19:00:12 crc kubenswrapper[4770]: E0126 19:00:12.015179 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.061104 4770 scope.go:117] "RemoveContainer" containerID="6e2c0eaaa3386f0f145ec900edada4c5d6f2b0a0a8a3f890f28d05f6bf1adbea" Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.319883 4770 scope.go:117] "RemoveContainer" containerID="bfe2d6694c0c7148d7aa41ec16468042a68d460faab8883d93aeec0e504c593b" Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.339505 4770 scope.go:117] "RemoveContainer" 
containerID="2d7b76a9522f1387c87031087326f4d69827cd7754b840c8baddd9632b8c1d8c" Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.905404 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" event={"ID":"de4cef66-1301-4ef9-bac5-f416e92ef9e5","Type":"ContainerStarted","Data":"593e82b7740c3eb4dd07accc699d6218a651a0f2ad06e7a4374e397beb6f7d12"} Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.906906 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t59c4" event={"ID":"d9ff64ab-79f6-4941-8de7-b9edbea8439d","Type":"ContainerStarted","Data":"ae8ed703c5e5643d8f5573f0902dc313189a6b79c0a34951a8ff8d3920c60292"} Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.909975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"176a0205-a131-4510-bcf5-420945c4c6ee","Type":"ContainerStarted","Data":"fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454"} Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.911791 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b","Type":"ContainerStarted","Data":"71a16b42364b5406d3cd5e0920a7991c1cac3c848891379f8e840e5114c478a7"} Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.914486 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hgfvf" event={"ID":"9d2095b9-c866-4424-aa95-31718bd65d61","Type":"ContainerStarted","Data":"8cf39ff99daa7e52243f8a389d5d2528ff5a1077a541cd55f6408b8c252dd52a"} Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.914628 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hgfvf" Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.917645 4770 generic.go:334] "Generic (PLEG): container finished" podID="48d5e8ce-0771-4ca8-9879-6ba39cd217a4" 
containerID="53289c4d0ca80ba1646469893f7026a0ea34c77250c6f7972de69df5c35ffe8d" exitCode=0 Jan 26 19:00:12 crc kubenswrapper[4770]: I0126 19:00:12.917733 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dtdfk" event={"ID":"48d5e8ce-0771-4ca8-9879-6ba39cd217a4","Type":"ContainerDied","Data":"53289c4d0ca80ba1646469893f7026a0ea34c77250c6f7972de69df5c35ffe8d"} Jan 26 19:00:12 crc kubenswrapper[4770]: E0126 19:00:12.921212 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.022291 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hgfvf" podStartSLOduration=7.71213378 podStartE2EDuration="23.02226313s" podCreationTimestamp="2026-01-26 18:59:50 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.485938866 +0000 UTC m=+1080.050845598" lastFinishedPulling="2026-01-26 19:00:10.796068216 +0000 UTC m=+1095.360974948" observedRunningTime="2026-01-26 19:00:13.013753799 +0000 UTC m=+1097.578660551" watchObservedRunningTime="2026-01-26 19:00:13.02226313 +0000 UTC m=+1097.587169862" Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.777384 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" path="/var/lib/kubelet/pods/05a727c9-a964-4a43-b1ae-8fc566f92253/volumes" Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.778373 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" path="/var/lib/kubelet/pods/ca4fc6b9-67f6-4fb5-8caf-043e122a1d03/volumes" Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.929801 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b42faa6-0359-44d0-96ea-7264ab250ba4","Type":"ContainerStarted","Data":"c71c028cf3bcfec1ce7f9e9c032f0ed2897134e2270b182e619a9c5acdda6b0d"} Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.934787 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e","Type":"ContainerStarted","Data":"d6695cf4a63a0211663f3d792758377d7a4898e83f825a3a4a22f40ddad4f884"} Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.936988 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"876c1ba4-ebd2-47b9-80d0-5158053c4fb8","Type":"ContainerStarted","Data":"5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698"} Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.943137 4770 generic.go:334] "Generic (PLEG): container finished" podID="6377faf9-1047-4fe9-a2b8-816f0213cde0" containerID="7bc7ac62957dc5d3243f189fa968d4076845aa518ea5c71b04934269bd6f52b6" exitCode=0 Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.943229 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" event={"ID":"6377faf9-1047-4fe9-a2b8-816f0213cde0","Type":"ContainerDied","Data":"7bc7ac62957dc5d3243f189fa968d4076845aa518ea5c71b04934269bd6f52b6"} Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.947126 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dtdfk" event={"ID":"48d5e8ce-0771-4ca8-9879-6ba39cd217a4","Type":"ContainerStarted","Data":"d54ea7a5e94cffe731d8444b8be453053c1ffb22db0bea2a6ec29440efb7f051"} Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.950876 4770 generic.go:334] "Generic (PLEG): container finished" podID="e12683bf-103c-40f3-997c-4b44d151def9" containerID="ae5f6278b806c7ba1381f68d8b924a55bce5929ffa5f022d28a9a632c7d40e15" exitCode=0 Jan 26 19:00:13 crc 
kubenswrapper[4770]: I0126 19:00:13.950956 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" event={"ID":"e12683bf-103c-40f3-997c-4b44d151def9","Type":"ContainerDied","Data":"ae5f6278b806c7ba1381f68d8b924a55bce5929ffa5f022d28a9a632c7d40e15"} Jan 26 19:00:13 crc kubenswrapper[4770]: I0126 19:00:13.958247 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7e3d608a-c9d7-4a29-b45a-0c175851fdbc","Type":"ContainerStarted","Data":"068bfa0e96ae78b6bff1f8efff78f9f17ffb6d1a412b4ca564ca833336e71fc8"} Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.967214 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dtdfk" event={"ID":"48d5e8ce-0771-4ca8-9879-6ba39cd217a4","Type":"ContainerStarted","Data":"df6339ab531531aa28a96082b930ba958f3963dd2d7f3a71501c3c42b2573b37"} Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.967739 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.969820 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerStarted","Data":"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d"} Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.972256 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" event={"ID":"e12683bf-103c-40f3-997c-4b44d151def9","Type":"ContainerStarted","Data":"70e5bc6e691e1f4f21f0c96c841110911bab2b82fa21f89faf26b5ac45347ee6"} Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.972933 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.974963 4770 generic.go:334] "Generic 
(PLEG): container finished" podID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerID="be515865a0784c8013b9d6c76d4465db58e2a20629c5d03d10d484377d15b8ec" exitCode=0 Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.975063 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" event={"ID":"de4cef66-1301-4ef9-bac5-f416e92ef9e5","Type":"ContainerDied","Data":"be515865a0784c8013b9d6c76d4465db58e2a20629c5d03d10d484377d15b8ec"} Jan 26 19:00:14 crc kubenswrapper[4770]: I0126 19:00:14.987419 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dtdfk" podStartSLOduration=9.848243766 podStartE2EDuration="24.987404961s" podCreationTimestamp="2026-01-26 18:59:50 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.453821024 +0000 UTC m=+1080.018727756" lastFinishedPulling="2026-01-26 19:00:10.592982229 +0000 UTC m=+1095.157888951" observedRunningTime="2026-01-26 19:00:14.984230944 +0000 UTC m=+1099.549137666" watchObservedRunningTime="2026-01-26 19:00:14.987404961 +0000 UTC m=+1099.552311693" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.044446 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" podStartSLOduration=11.0444322 podStartE2EDuration="11.0444322s" podCreationTimestamp="2026-01-26 19:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:15.044088691 +0000 UTC m=+1099.608995423" watchObservedRunningTime="2026-01-26 19:00:15.0444322 +0000 UTC m=+1099.609338932" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.776755 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.823982 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw7bz\" (UniqueName: \"kubernetes.io/projected/6377faf9-1047-4fe9-a2b8-816f0213cde0-kube-api-access-xw7bz\") pod \"6377faf9-1047-4fe9-a2b8-816f0213cde0\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.824104 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6377faf9-1047-4fe9-a2b8-816f0213cde0-secret-volume\") pod \"6377faf9-1047-4fe9-a2b8-816f0213cde0\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.824154 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6377faf9-1047-4fe9-a2b8-816f0213cde0-config-volume\") pod \"6377faf9-1047-4fe9-a2b8-816f0213cde0\" (UID: \"6377faf9-1047-4fe9-a2b8-816f0213cde0\") " Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.830047 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6377faf9-1047-4fe9-a2b8-816f0213cde0-config-volume" (OuterVolumeSpecName: "config-volume") pod "6377faf9-1047-4fe9-a2b8-816f0213cde0" (UID: "6377faf9-1047-4fe9-a2b8-816f0213cde0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.838166 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6377faf9-1047-4fe9-a2b8-816f0213cde0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6377faf9-1047-4fe9-a2b8-816f0213cde0" (UID: "6377faf9-1047-4fe9-a2b8-816f0213cde0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.838213 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6377faf9-1047-4fe9-a2b8-816f0213cde0-kube-api-access-xw7bz" (OuterVolumeSpecName: "kube-api-access-xw7bz") pod "6377faf9-1047-4fe9-a2b8-816f0213cde0" (UID: "6377faf9-1047-4fe9-a2b8-816f0213cde0"). InnerVolumeSpecName "kube-api-access-xw7bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.926488 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw7bz\" (UniqueName: \"kubernetes.io/projected/6377faf9-1047-4fe9-a2b8-816f0213cde0-kube-api-access-xw7bz\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.926539 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6377faf9-1047-4fe9-a2b8-816f0213cde0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.926554 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6377faf9-1047-4fe9-a2b8-816f0213cde0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.969031 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57f7fc7997-g2g48" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: i/o timeout" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.983840 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" event={"ID":"6377faf9-1047-4fe9-a2b8-816f0213cde0","Type":"ContainerDied","Data":"d8f300380dee15e2d45a2b1a070da94af7ad845d6ce0efb60ee7b03d39cbae27"} Jan 26 19:00:15 crc 
kubenswrapper[4770]: I0126 19:00:15.983882 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f300380dee15e2d45a2b1a070da94af7ad845d6ce0efb60ee7b03d39cbae27" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.983897 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt" Jan 26 19:00:15 crc kubenswrapper[4770]: I0126 19:00:15.984502 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 19:00:16 crc kubenswrapper[4770]: I0126 19:00:16.993713 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" event={"ID":"de4cef66-1301-4ef9-bac5-f416e92ef9e5","Type":"ContainerStarted","Data":"b9a9c12526ec35d9a1d5d376000973cb6db5a4083045e8eda474385f1c7a76ed"} Jan 26 19:00:16 crc kubenswrapper[4770]: I0126 19:00:16.994079 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:16 crc kubenswrapper[4770]: I0126 19:00:16.995419 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t59c4" event={"ID":"d9ff64ab-79f6-4941-8de7-b9edbea8439d","Type":"ContainerStarted","Data":"d7c3d2a19e12358a10ab82ed33955054a1e52ebf049b1d869417afb4412edc72"} Jan 26 19:00:16 crc kubenswrapper[4770]: I0126 19:00:16.997560 4770 generic.go:334] "Generic (PLEG): container finished" podID="e620ef2b-6951-4c91-8517-c35e07ee8a2a" containerID="13311631a18f39c60984dc3d13a0e107b4bb65b171eade987a0ffe382c464e77" exitCode=0 Jan 26 19:00:16 crc kubenswrapper[4770]: I0126 19:00:16.997651 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e620ef2b-6951-4c91-8517-c35e07ee8a2a","Type":"ContainerDied","Data":"13311631a18f39c60984dc3d13a0e107b4bb65b171eade987a0ffe382c464e77"} Jan 26 19:00:17 crc kubenswrapper[4770]: 
I0126 19:00:17.001764 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b42faa6-0359-44d0-96ea-7264ab250ba4","Type":"ContainerStarted","Data":"bd223aad4458f9b7624ee9e35c6c7a976dc40ee5c1fdd42d90700d5eba7c8743"} Jan 26 19:00:17 crc kubenswrapper[4770]: I0126 19:00:17.004876 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"23527c1a-fd08-4cc7-a6b7-48fe3988ac6e","Type":"ContainerStarted","Data":"68fc7b24395a0c18458d7eb8225f922ef0f7bf08e1059dfa639606e955ff5224"} Jan 26 19:00:17 crc kubenswrapper[4770]: I0126 19:00:17.027867 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" podStartSLOduration=13.027847078 podStartE2EDuration="13.027847078s" podCreationTimestamp="2026-01-26 19:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:17.019816029 +0000 UTC m=+1101.584722781" watchObservedRunningTime="2026-01-26 19:00:17.027847078 +0000 UTC m=+1101.592753820" Jan 26 19:00:17 crc kubenswrapper[4770]: I0126 19:00:17.068589 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-t59c4" podStartSLOduration=9.903206765 podStartE2EDuration="14.068567954s" podCreationTimestamp="2026-01-26 19:00:03 +0000 UTC" firstStartedPulling="2026-01-26 19:00:12.029997761 +0000 UTC m=+1096.594904493" lastFinishedPulling="2026-01-26 19:00:16.19535895 +0000 UTC m=+1100.760265682" observedRunningTime="2026-01-26 19:00:17.043538424 +0000 UTC m=+1101.608445166" watchObservedRunningTime="2026-01-26 19:00:17.068567954 +0000 UTC m=+1101.633474696" Jan 26 19:00:17 crc kubenswrapper[4770]: I0126 19:00:17.092982 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.330154812 podStartE2EDuration="27.092961986s" 
podCreationTimestamp="2026-01-26 18:59:50 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.375980689 +0000 UTC m=+1079.940887421" lastFinishedPulling="2026-01-26 19:00:16.138787863 +0000 UTC m=+1100.703694595" observedRunningTime="2026-01-26 19:00:17.070085025 +0000 UTC m=+1101.634991787" watchObservedRunningTime="2026-01-26 19:00:17.092961986 +0000 UTC m=+1101.657868718" Jan 26 19:00:17 crc kubenswrapper[4770]: I0126 19:00:17.128719 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.80627045 podStartE2EDuration="24.128692348s" podCreationTimestamp="2026-01-26 18:59:53 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.849221687 +0000 UTC m=+1080.414128419" lastFinishedPulling="2026-01-26 19:00:16.171643585 +0000 UTC m=+1100.736550317" observedRunningTime="2026-01-26 19:00:17.108816908 +0000 UTC m=+1101.673723650" watchObservedRunningTime="2026-01-26 19:00:17.128692348 +0000 UTC m=+1101.693599080" Jan 26 19:00:18 crc kubenswrapper[4770]: I0126 19:00:18.015724 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b" containerID="71a16b42364b5406d3cd5e0920a7991c1cac3c848891379f8e840e5114c478a7" exitCode=0 Jan 26 19:00:18 crc kubenswrapper[4770]: I0126 19:00:18.015810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b","Type":"ContainerDied","Data":"71a16b42364b5406d3cd5e0920a7991c1cac3c848891379f8e840e5114c478a7"} Jan 26 19:00:18 crc kubenswrapper[4770]: I0126 19:00:18.020408 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e620ef2b-6951-4c91-8517-c35e07ee8a2a","Type":"ContainerStarted","Data":"23379cb6ae6a717e698b5d27f6d167c105953a6c5a67d30e7da993e51e477e93"} Jan 26 19:00:18 crc kubenswrapper[4770]: I0126 19:00:18.082778 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-galera-0" podStartSLOduration=19.971080359 podStartE2EDuration="36.082755197s" podCreationTimestamp="2026-01-26 18:59:42 +0000 UTC" firstStartedPulling="2026-01-26 18:59:54.487586852 +0000 UTC m=+1079.052493584" lastFinishedPulling="2026-01-26 19:00:10.59926169 +0000 UTC m=+1095.164168422" observedRunningTime="2026-01-26 19:00:18.077193265 +0000 UTC m=+1102.642100037" watchObservedRunningTime="2026-01-26 19:00:18.082755197 +0000 UTC m=+1102.647661949" Jan 26 19:00:18 crc kubenswrapper[4770]: I0126 19:00:18.409818 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 19:00:18 crc kubenswrapper[4770]: I0126 19:00:18.482588 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 19:00:19 crc kubenswrapper[4770]: I0126 19:00:19.048294 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b","Type":"ContainerStarted","Data":"8077ce5cba7c3f934a0db67f432046d95da4a5b5787b81df56c4b068c56362c3"} Jan 26 19:00:19 crc kubenswrapper[4770]: I0126 19:00:19.048596 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 19:00:19 crc kubenswrapper[4770]: I0126 19:00:19.080749 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.371005583 podStartE2EDuration="36.080698599s" podCreationTimestamp="2026-01-26 18:59:43 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.226332053 +0000 UTC m=+1079.791238785" lastFinishedPulling="2026-01-26 19:00:10.936025069 +0000 UTC m=+1095.500931801" observedRunningTime="2026-01-26 19:00:19.068614379 +0000 UTC m=+1103.633521111" watchObservedRunningTime="2026-01-26 19:00:19.080698599 +0000 UTC m=+1103.645605341" Jan 26 19:00:19 crc kubenswrapper[4770]: I0126 19:00:19.091814 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 19:00:19 crc kubenswrapper[4770]: I0126 19:00:19.203266 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 19:00:19 crc kubenswrapper[4770]: I0126 19:00:19.242422 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 19:00:19 crc kubenswrapper[4770]: I0126 19:00:19.502023 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.054770 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.092329 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.281082 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 19:00:20 crc kubenswrapper[4770]: E0126 19:00:20.281497 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="dnsmasq-dns" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.281509 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="dnsmasq-dns" Jan 26 19:00:20 crc kubenswrapper[4770]: E0126 19:00:20.283480 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6377faf9-1047-4fe9-a2b8-816f0213cde0" containerName="collect-profiles" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.283523 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6377faf9-1047-4fe9-a2b8-816f0213cde0" containerName="collect-profiles" Jan 26 19:00:20 crc kubenswrapper[4770]: E0126 19:00:20.283551 4770 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="init" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.283559 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="init" Jan 26 19:00:20 crc kubenswrapper[4770]: E0126 19:00:20.283577 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="init" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.283599 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="init" Jan 26 19:00:20 crc kubenswrapper[4770]: E0126 19:00:20.283618 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="dnsmasq-dns" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.283624 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="dnsmasq-dns" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.283918 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6377faf9-1047-4fe9-a2b8-816f0213cde0" containerName="collect-profiles" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.283937 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a727c9-a964-4a43-b1ae-8fc566f92253" containerName="dnsmasq-dns" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.283970 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4fc6b9-67f6-4fb5-8caf-043e122a1d03" containerName="dnsmasq-dns" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.285100 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.289306 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.289571 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.290185 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mxdmc" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.290368 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.297784 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.336468 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.402290 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49994115-56ea-46a6-a7ae-bff2b9751bc8-scripts\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.402337 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.402646 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.402718 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49994115-56ea-46a6-a7ae-bff2b9751bc8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.402801 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.403330 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zd8m\" (UniqueName: \"kubernetes.io/projected/49994115-56ea-46a6-a7ae-bff2b9751bc8-kube-api-access-8zd8m\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.403436 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49994115-56ea-46a6-a7ae-bff2b9751bc8-config\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.505532 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49994115-56ea-46a6-a7ae-bff2b9751bc8-config\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") 
" pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.505622 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49994115-56ea-46a6-a7ae-bff2b9751bc8-scripts\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.505657 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.505685 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.505715 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49994115-56ea-46a6-a7ae-bff2b9751bc8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.505762 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.505872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zd8m\" 
(UniqueName: \"kubernetes.io/projected/49994115-56ea-46a6-a7ae-bff2b9751bc8-kube-api-access-8zd8m\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.506556 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49994115-56ea-46a6-a7ae-bff2b9751bc8-config\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.507103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49994115-56ea-46a6-a7ae-bff2b9751bc8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.507233 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49994115-56ea-46a6-a7ae-bff2b9751bc8-scripts\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.513685 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.513957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.516408 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49994115-56ea-46a6-a7ae-bff2b9751bc8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.530544 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zd8m\" (UniqueName: \"kubernetes.io/projected/49994115-56ea-46a6-a7ae-bff2b9751bc8-kube-api-access-8zd8m\") pod \"ovn-northd-0\" (UID: \"49994115-56ea-46a6-a7ae-bff2b9751bc8\") " pod="openstack/ovn-northd-0" Jan 26 19:00:20 crc kubenswrapper[4770]: I0126 19:00:20.622192 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 19:00:21 crc kubenswrapper[4770]: I0126 19:00:21.105605 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 19:00:21 crc kubenswrapper[4770]: W0126 19:00:21.105743 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49994115_56ea_46a6_a7ae_bff2b9751bc8.slice/crio-f561bb238d1ae4e5a39fe0020e38bd3036445c7bea3bb83eb8bef2d7e111c70f WatchSource:0}: Error finding container f561bb238d1ae4e5a39fe0020e38bd3036445c7bea3bb83eb8bef2d7e111c70f: Status 404 returned error can't find the container with id f561bb238d1ae4e5a39fe0020e38bd3036445c7bea3bb83eb8bef2d7e111c70f Jan 26 19:00:22 crc kubenswrapper[4770]: I0126 19:00:22.073670 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"49994115-56ea-46a6-a7ae-bff2b9751bc8","Type":"ContainerStarted","Data":"1498fbcd5e8d6445cdf879df5c2778cf758c66c8e869ca1f4b67c2f71642879c"} Jan 26 19:00:22 crc kubenswrapper[4770]: I0126 19:00:22.074272 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 19:00:22 crc 
kubenswrapper[4770]: I0126 19:00:22.074287 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"49994115-56ea-46a6-a7ae-bff2b9751bc8","Type":"ContainerStarted","Data":"707023701c3915f3f4e1b5855dd3ac8c4bb8fa84e9920bf010e991e1c2837f25"} Jan 26 19:00:22 crc kubenswrapper[4770]: I0126 19:00:22.074298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"49994115-56ea-46a6-a7ae-bff2b9751bc8","Type":"ContainerStarted","Data":"f561bb238d1ae4e5a39fe0020e38bd3036445c7bea3bb83eb8bef2d7e111c70f"} Jan 26 19:00:22 crc kubenswrapper[4770]: I0126 19:00:22.075685 4770 generic.go:334] "Generic (PLEG): container finished" podID="2d267c82-de7b-48b9-98f5-66d78067778d" containerID="d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d" exitCode=0 Jan 26 19:00:22 crc kubenswrapper[4770]: I0126 19:00:22.075774 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerDied","Data":"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d"} Jan 26 19:00:22 crc kubenswrapper[4770]: I0126 19:00:22.089545 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.613670572 podStartE2EDuration="2.089527332s" podCreationTimestamp="2026-01-26 19:00:20 +0000 UTC" firstStartedPulling="2026-01-26 19:00:21.107814414 +0000 UTC m=+1105.672721146" lastFinishedPulling="2026-01-26 19:00:21.583671174 +0000 UTC m=+1106.148577906" observedRunningTime="2026-01-26 19:00:22.088190796 +0000 UTC m=+1106.653097528" watchObservedRunningTime="2026-01-26 19:00:22.089527332 +0000 UTC m=+1106.654434074" Jan 26 19:00:23 crc kubenswrapper[4770]: I0126 19:00:23.480398 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 19:00:23 crc kubenswrapper[4770]: I0126 19:00:23.480776 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 19:00:23 crc kubenswrapper[4770]: I0126 19:00:23.619837 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 19:00:24 crc kubenswrapper[4770]: I0126 19:00:24.190633 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 19:00:24 crc kubenswrapper[4770]: I0126 19:00:24.680542 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:24 crc kubenswrapper[4770]: I0126 19:00:24.741104 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56965dc457-dqf6b"] Jan 26 19:00:24 crc kubenswrapper[4770]: I0126 19:00:24.741340 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" podUID="e12683bf-103c-40f3-997c-4b44d151def9" containerName="dnsmasq-dns" containerID="cri-o://70e5bc6e691e1f4f21f0c96c841110911bab2b82fa21f89faf26b5ac45347ee6" gracePeriod=10 Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.019772 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zbpfl"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.020860 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.043534 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zbpfl"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.100741 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-operator-scripts\") pod \"keystone-db-create-zbpfl\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.101328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qbj\" (UniqueName: \"kubernetes.io/projected/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-kube-api-access-r4qbj\") pod \"keystone-db-create-zbpfl\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.109664 4770 generic.go:334] "Generic (PLEG): container finished" podID="e12683bf-103c-40f3-997c-4b44d151def9" containerID="70e5bc6e691e1f4f21f0c96c841110911bab2b82fa21f89faf26b5ac45347ee6" exitCode=0 Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.109834 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" event={"ID":"e12683bf-103c-40f3-997c-4b44d151def9","Type":"ContainerDied","Data":"70e5bc6e691e1f4f21f0c96c841110911bab2b82fa21f89faf26b5ac45347ee6"} Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.150768 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b35e-account-create-update-7wnpn"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.152340 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.157059 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.167328 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b35e-account-create-update-7wnpn"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.203106 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4qbj\" (UniqueName: \"kubernetes.io/projected/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-kube-api-access-r4qbj\") pod \"keystone-db-create-zbpfl\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.203288 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-operator-scripts\") pod \"keystone-db-create-zbpfl\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.206856 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-operator-scripts\") pod \"keystone-db-create-zbpfl\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.238191 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4qbj\" (UniqueName: \"kubernetes.io/projected/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-kube-api-access-r4qbj\") pod \"keystone-db-create-zbpfl\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc 
kubenswrapper[4770]: I0126 19:00:25.248645 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pckwh"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.249999 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.266806 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.266975 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.270954 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0242-account-create-update-kpq4x"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.272585 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.278735 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.289052 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pckwh"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.294883 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0242-account-create-update-kpq4x"] Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.305050 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-operator-scripts\") pod \"keystone-b35e-account-create-update-7wnpn\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 
19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.305130 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjlcn\" (UniqueName: \"kubernetes.io/projected/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-kube-api-access-kjlcn\") pod \"keystone-b35e-account-create-update-7wnpn\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.329223 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.341133 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406312 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4xgr\" (UniqueName: \"kubernetes.io/projected/e12683bf-103c-40f3-997c-4b44d151def9-kube-api-access-t4xgr\") pod \"e12683bf-103c-40f3-997c-4b44d151def9\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406395 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-dns-svc\") pod \"e12683bf-103c-40f3-997c-4b44d151def9\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406444 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-config\") pod \"e12683bf-103c-40f3-997c-4b44d151def9\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406567 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-ovsdbserver-nb\") pod \"e12683bf-103c-40f3-997c-4b44d151def9\" (UID: \"e12683bf-103c-40f3-997c-4b44d151def9\") " Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406795 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b724a151-32e2-4518-8f64-9d06b50acd55-operator-scripts\") pod \"placement-db-create-pckwh\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406826 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjt6j\" (UniqueName: \"kubernetes.io/projected/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-kube-api-access-qjt6j\") pod \"placement-0242-account-create-update-kpq4x\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406886 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-operator-scripts\") pod \"placement-0242-account-create-update-kpq4x\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.406949 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-operator-scripts\") pod \"keystone-b35e-account-create-update-7wnpn\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.407001 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nll67\" (UniqueName: \"kubernetes.io/projected/b724a151-32e2-4518-8f64-9d06b50acd55-kube-api-access-nll67\") pod \"placement-db-create-pckwh\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.407043 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjlcn\" (UniqueName: \"kubernetes.io/projected/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-kube-api-access-kjlcn\") pod \"keystone-b35e-account-create-update-7wnpn\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.411355 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-operator-scripts\") pod \"keystone-b35e-account-create-update-7wnpn\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.413458 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12683bf-103c-40f3-997c-4b44d151def9-kube-api-access-t4xgr" (OuterVolumeSpecName: "kube-api-access-t4xgr") pod "e12683bf-103c-40f3-997c-4b44d151def9" (UID: "e12683bf-103c-40f3-997c-4b44d151def9"). InnerVolumeSpecName "kube-api-access-t4xgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.433360 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjlcn\" (UniqueName: \"kubernetes.io/projected/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-kube-api-access-kjlcn\") pod \"keystone-b35e-account-create-update-7wnpn\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.452029 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-config" (OuterVolumeSpecName: "config") pod "e12683bf-103c-40f3-997c-4b44d151def9" (UID: "e12683bf-103c-40f3-997c-4b44d151def9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.453927 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e12683bf-103c-40f3-997c-4b44d151def9" (UID: "e12683bf-103c-40f3-997c-4b44d151def9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.462956 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e12683bf-103c-40f3-997c-4b44d151def9" (UID: "e12683bf-103c-40f3-997c-4b44d151def9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.481967 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.492128 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508436 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-operator-scripts\") pod \"placement-0242-account-create-update-kpq4x\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508524 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nll67\" (UniqueName: \"kubernetes.io/projected/b724a151-32e2-4518-8f64-9d06b50acd55-kube-api-access-nll67\") pod \"placement-db-create-pckwh\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b724a151-32e2-4518-8f64-9d06b50acd55-operator-scripts\") pod \"placement-db-create-pckwh\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508607 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjt6j\" (UniqueName: \"kubernetes.io/projected/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-kube-api-access-qjt6j\") pod \"placement-0242-account-create-update-kpq4x\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508666 4770 reconciler_common.go:293] "Volume 
detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508678 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4xgr\" (UniqueName: \"kubernetes.io/projected/e12683bf-103c-40f3-997c-4b44d151def9-kube-api-access-t4xgr\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508691 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.508703 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e12683bf-103c-40f3-997c-4b44d151def9-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.509696 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-operator-scripts\") pod \"placement-0242-account-create-update-kpq4x\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.510299 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b724a151-32e2-4518-8f64-9d06b50acd55-operator-scripts\") pod \"placement-db-create-pckwh\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.527057 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjt6j\" (UniqueName: 
\"kubernetes.io/projected/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-kube-api-access-qjt6j\") pod \"placement-0242-account-create-update-kpq4x\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.528826 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nll67\" (UniqueName: \"kubernetes.io/projected/b724a151-32e2-4518-8f64-9d06b50acd55-kube-api-access-nll67\") pod \"placement-db-create-pckwh\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.634423 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pckwh" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.639310 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.866971 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zbpfl"] Jan 26 19:00:25 crc kubenswrapper[4770]: W0126 19:00:25.898699 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93b63ae4_9e1b_4518_b09f_3b5f3893a51e.slice/crio-bc83c8fb18afb0961cad9e402fcd79b129a44b73ae5c6e628ff0f8528c1522a5 WatchSource:0}: Error finding container bc83c8fb18afb0961cad9e402fcd79b129a44b73ae5c6e628ff0f8528c1522a5: Status 404 returned error can't find the container with id bc83c8fb18afb0961cad9e402fcd79b129a44b73ae5c6e628ff0f8528c1522a5 Jan 26 19:00:25 crc kubenswrapper[4770]: I0126 19:00:25.983008 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0242-account-create-update-kpq4x"] Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.002203 4770 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/keystone-b35e-account-create-update-7wnpn"] Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.110950 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pckwh"] Jan 26 19:00:26 crc kubenswrapper[4770]: W0126 19:00:26.118155 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb724a151_32e2_4518_8f64_9d06b50acd55.slice/crio-2e28297e8090bc075ae79cb0fc942289ca0e3b43d4c508409a193571316a2709 WatchSource:0}: Error finding container 2e28297e8090bc075ae79cb0fc942289ca0e3b43d4c508409a193571316a2709: Status 404 returned error can't find the container with id 2e28297e8090bc075ae79cb0fc942289ca0e3b43d4c508409a193571316a2709 Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.119592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zbpfl" event={"ID":"93b63ae4-9e1b-4518-b09f-3b5f3893a51e","Type":"ContainerStarted","Data":"bc83c8fb18afb0961cad9e402fcd79b129a44b73ae5c6e628ff0f8528c1522a5"} Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.131032 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" event={"ID":"e12683bf-103c-40f3-997c-4b44d151def9","Type":"ContainerDied","Data":"cadad39ab60402991461dc58a9b177437e3fa5454b60922886535d791f4f68a0"} Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.131065 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56965dc457-dqf6b" Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.131080 4770 scope.go:117] "RemoveContainer" containerID="70e5bc6e691e1f4f21f0c96c841110911bab2b82fa21f89faf26b5ac45347ee6" Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.135298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b35e-account-create-update-7wnpn" event={"ID":"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b","Type":"ContainerStarted","Data":"6b5f9a3678c496b81016458e5e661143e7c5d40c82865ccb56ab145c58379459"} Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.136872 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0242-account-create-update-kpq4x" event={"ID":"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4","Type":"ContainerStarted","Data":"8b844dffcab2c5551e5508c756179a57a81e5fb2f5ce9b175594cecad33007d2"} Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.161277 4770 scope.go:117] "RemoveContainer" containerID="ae5f6278b806c7ba1381f68d8b924a55bce5929ffa5f022d28a9a632c7d40e15" Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.165478 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56965dc457-dqf6b"] Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.175780 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56965dc457-dqf6b"] Jan 26 19:00:26 crc kubenswrapper[4770]: I0126 19:00:26.300632 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.161183 4770 generic.go:334] "Generic (PLEG): container finished" podID="d76dedbd-e05f-4893-a0b5-9c68a83eb5f4" containerID="8840c5670b533d10530f2c00303dd6291a56339e7c629941ed2c2bf229eebda8" exitCode=0 Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.161247 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-0242-account-create-update-kpq4x" event={"ID":"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4","Type":"ContainerDied","Data":"8840c5670b533d10530f2c00303dd6291a56339e7c629941ed2c2bf229eebda8"} Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.162952 4770 generic.go:334] "Generic (PLEG): container finished" podID="93b63ae4-9e1b-4518-b09f-3b5f3893a51e" containerID="1df95aa8bbb4d4cf4cc938ec31226afdccfdaf03e01e29c8a65a2addc3e7b498" exitCode=0 Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.162998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zbpfl" event={"ID":"93b63ae4-9e1b-4518-b09f-3b5f3893a51e","Type":"ContainerDied","Data":"1df95aa8bbb4d4cf4cc938ec31226afdccfdaf03e01e29c8a65a2addc3e7b498"} Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.164161 4770 generic.go:334] "Generic (PLEG): container finished" podID="b724a151-32e2-4518-8f64-9d06b50acd55" containerID="c1d3dab9457a2832e8c598970388929b33fa495a42d90bf31e7933d0f4ac9939" exitCode=0 Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.164201 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pckwh" event={"ID":"b724a151-32e2-4518-8f64-9d06b50acd55","Type":"ContainerDied","Data":"c1d3dab9457a2832e8c598970388929b33fa495a42d90bf31e7933d0f4ac9939"} Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.164216 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pckwh" event={"ID":"b724a151-32e2-4518-8f64-9d06b50acd55","Type":"ContainerStarted","Data":"2e28297e8090bc075ae79cb0fc942289ca0e3b43d4c508409a193571316a2709"} Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.175544 4770 generic.go:334] "Generic (PLEG): container finished" podID="e12040f8-22b1-43fe-a86f-6d39c1ac4c8b" containerID="b9f7e457601340fabc71e1711cc35493fbf51a197bf483fb270a69dbdc35aeae" exitCode=0 Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.176409 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-b35e-account-create-update-7wnpn" event={"ID":"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b","Type":"ContainerDied","Data":"b9f7e457601340fabc71e1711cc35493fbf51a197bf483fb270a69dbdc35aeae"} Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.206963 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c6ff9699-8rfv9"] Jan 26 19:00:27 crc kubenswrapper[4770]: E0126 19:00:27.207379 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12683bf-103c-40f3-997c-4b44d151def9" containerName="init" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.207395 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12683bf-103c-40f3-997c-4b44d151def9" containerName="init" Jan 26 19:00:27 crc kubenswrapper[4770]: E0126 19:00:27.207418 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12683bf-103c-40f3-997c-4b44d151def9" containerName="dnsmasq-dns" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.207426 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12683bf-103c-40f3-997c-4b44d151def9" containerName="dnsmasq-dns" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.207621 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12683bf-103c-40f3-997c-4b44d151def9" containerName="dnsmasq-dns" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.208772 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.224358 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c6ff9699-8rfv9"] Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.259655 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-config\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.259731 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-sb\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.259773 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqt8l\" (UniqueName: \"kubernetes.io/projected/f78baf61-9a55-4017-a0fe-90336e976053-kube-api-access-dqt8l\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.259812 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-dns-svc\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.259827 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-nb\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.294758 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-6pgvv"] Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.295870 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.305228 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-0d4a-account-create-update-42d77"] Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.306295 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.313002 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.318063 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-0d4a-account-create-update-42d77"] Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.328730 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-6pgvv"] Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.361943 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935c454-cbcc-4b53-a12e-4532e2043189-operator-scripts\") pod \"watcher-0d4a-account-create-update-42d77\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362006 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-operator-scripts\") pod \"watcher-db-create-6pgvv\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362043 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-dns-svc\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362070 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-nb\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362160 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrgt4\" (UniqueName: \"kubernetes.io/projected/e935c454-cbcc-4b53-a12e-4532e2043189-kube-api-access-zrgt4\") pod \"watcher-0d4a-account-create-update-42d77\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362226 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-config\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362250 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn95c\" (UniqueName: \"kubernetes.io/projected/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-kube-api-access-fn95c\") pod \"watcher-db-create-6pgvv\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362292 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-sb\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362330 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqt8l\" (UniqueName: \"kubernetes.io/projected/f78baf61-9a55-4017-a0fe-90336e976053-kube-api-access-dqt8l\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.362895 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-dns-svc\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.363152 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-config\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.363479 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-nb\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.363563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-sb\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.394634 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqt8l\" (UniqueName: \"kubernetes.io/projected/f78baf61-9a55-4017-a0fe-90336e976053-kube-api-access-dqt8l\") pod \"dnsmasq-dns-c6ff9699-8rfv9\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.463731 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrgt4\" (UniqueName: \"kubernetes.io/projected/e935c454-cbcc-4b53-a12e-4532e2043189-kube-api-access-zrgt4\") pod \"watcher-0d4a-account-create-update-42d77\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.463837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn95c\" (UniqueName: \"kubernetes.io/projected/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-kube-api-access-fn95c\") pod \"watcher-db-create-6pgvv\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.463920 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e935c454-cbcc-4b53-a12e-4532e2043189-operator-scripts\") pod \"watcher-0d4a-account-create-update-42d77\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.463956 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-operator-scripts\") pod \"watcher-db-create-6pgvv\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.464802 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-operator-scripts\") pod \"watcher-db-create-6pgvv\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.465051 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935c454-cbcc-4b53-a12e-4532e2043189-operator-scripts\") pod \"watcher-0d4a-account-create-update-42d77\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.481266 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn95c\" (UniqueName: \"kubernetes.io/projected/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-kube-api-access-fn95c\") pod \"watcher-db-create-6pgvv\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.485735 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrgt4\" (UniqueName: 
\"kubernetes.io/projected/e935c454-cbcc-4b53-a12e-4532e2043189-kube-api-access-zrgt4\") pod \"watcher-0d4a-account-create-update-42d77\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.546999 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.618006 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.635288 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:27 crc kubenswrapper[4770]: I0126 19:00:27.791023 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12683bf-103c-40f3-997c-4b44d151def9" path="/var/lib/kubelet/pods/e12683bf-103c-40f3-997c-4b44d151def9/volumes" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.117628 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c6ff9699-8rfv9"] Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.130676 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-6pgvv"] Jan 26 19:00:28 crc kubenswrapper[4770]: W0126 19:00:28.135327 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode710d1c6_ece5_400d_b061_8ad6cf59c5b6.slice/crio-281eeaa376939d98d3172be55ad5ecf372f9d0b618103f545cb0d35e5af274fe WatchSource:0}: Error finding container 281eeaa376939d98d3172be55ad5ecf372f9d0b618103f545cb0d35e5af274fe: Status 404 returned error can't find the container with id 281eeaa376939d98d3172be55ad5ecf372f9d0b618103f545cb0d35e5af274fe Jan 26 19:00:28 crc kubenswrapper[4770]: W0126 19:00:28.136969 4770 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf78baf61_9a55_4017_a0fe_90336e976053.slice/crio-a2ba70c76ddbef9d73b4a10e0c16a5dc5ee364b0bd160c3c301fd9ef29de34f8 WatchSource:0}: Error finding container a2ba70c76ddbef9d73b4a10e0c16a5dc5ee364b0bd160c3c301fd9ef29de34f8: Status 404 returned error can't find the container with id a2ba70c76ddbef9d73b4a10e0c16a5dc5ee364b0bd160c3c301fd9ef29de34f8 Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.206965 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" event={"ID":"f78baf61-9a55-4017-a0fe-90336e976053","Type":"ContainerStarted","Data":"a2ba70c76ddbef9d73b4a10e0c16a5dc5ee364b0bd160c3c301fd9ef29de34f8"} Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.209075 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-6pgvv" event={"ID":"e710d1c6-ece5-400d-b061-8ad6cf59c5b6","Type":"ContainerStarted","Data":"281eeaa376939d98d3172be55ad5ecf372f9d0b618103f545cb0d35e5af274fe"} Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.323315 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-0d4a-account-create-update-42d77"] Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.341875 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.350500 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.354334 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.354376 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.354596 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kn966" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.354690 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.413008 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.414992 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.415063 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3117c9b-d620-4686-afa7-315bbae0e328-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.415103 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 
19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.415201 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f3117c9b-d620-4686-afa7-315bbae0e328-cache\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.415287 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dbk\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-kube-api-access-75dbk\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.415370 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f3117c9b-d620-4686-afa7-315bbae0e328-lock\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.521443 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.521907 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3117c9b-d620-4686-afa7-315bbae0e328-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.521950 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.522003 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f3117c9b-d620-4686-afa7-315bbae0e328-cache\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.522057 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75dbk\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-kube-api-access-75dbk\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: E0126 19:00:28.521693 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 19:00:28 crc kubenswrapper[4770]: E0126 19:00:28.522612 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 19:00:28 crc kubenswrapper[4770]: E0126 19:00:28.522663 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift podName:f3117c9b-d620-4686-afa7-315bbae0e328 nodeName:}" failed. No retries permitted until 2026-01-26 19:00:29.022642541 +0000 UTC m=+1113.587549273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift") pod "swift-storage-0" (UID: "f3117c9b-d620-4686-afa7-315bbae0e328") : configmap "swift-ring-files" not found Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.522862 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.523346 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f3117c9b-d620-4686-afa7-315bbae0e328-lock\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.523388 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f3117c9b-d620-4686-afa7-315bbae0e328-cache\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.527310 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f3117c9b-d620-4686-afa7-315bbae0e328-lock\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.532995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3117c9b-d620-4686-afa7-315bbae0e328-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " 
pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.544251 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75dbk\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-kube-api-access-75dbk\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.569981 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.673028 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.726516 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-operator-scripts\") pod \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.726561 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjt6j\" (UniqueName: \"kubernetes.io/projected/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-kube-api-access-qjt6j\") pod \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\" (UID: \"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4\") " Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.731119 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d76dedbd-e05f-4893-a0b5-9c68a83eb5f4" (UID: 
"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.736560 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-kube-api-access-qjt6j" (OuterVolumeSpecName: "kube-api-access-qjt6j") pod "d76dedbd-e05f-4893-a0b5-9c68a83eb5f4" (UID: "d76dedbd-e05f-4893-a0b5-9c68a83eb5f4"). InnerVolumeSpecName "kube-api-access-qjt6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.834232 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vx59z"] Jan 26 19:00:28 crc kubenswrapper[4770]: E0126 19:00:28.834731 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76dedbd-e05f-4893-a0b5-9c68a83eb5f4" containerName="mariadb-account-create-update" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.834745 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76dedbd-e05f-4893-a0b5-9c68a83eb5f4" containerName="mariadb-account-create-update" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.834984 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76dedbd-e05f-4893-a0b5-9c68a83eb5f4" containerName="mariadb-account-create-update" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.835734 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.836046 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.836108 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjt6j\" (UniqueName: \"kubernetes.io/projected/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4-kube-api-access-qjt6j\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.837650 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.837812 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.840965 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.867303 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vx59z"] Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.868932 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pckwh" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.874022 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.923361 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.937107 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjlcn\" (UniqueName: \"kubernetes.io/projected/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-kube-api-access-kjlcn\") pod \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.937207 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-operator-scripts\") pod \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\" (UID: \"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b\") " Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.938733 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b724a151-32e2-4518-8f64-9d06b50acd55-operator-scripts\") pod \"b724a151-32e2-4518-8f64-9d06b50acd55\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.938850 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nll67\" (UniqueName: \"kubernetes.io/projected/b724a151-32e2-4518-8f64-9d06b50acd55-kube-api-access-nll67\") pod \"b724a151-32e2-4518-8f64-9d06b50acd55\" (UID: \"b724a151-32e2-4518-8f64-9d06b50acd55\") " Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.939100 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ceb06b58-7f92-4704-909b-3c591476f04c-etc-swift\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.939182 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-ring-data-devices\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.939223 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-scripts\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.939254 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4x8g\" (UniqueName: \"kubernetes.io/projected/ceb06b58-7f92-4704-909b-3c591476f04c-kube-api-access-m4x8g\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.939321 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-dispersionconf\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.939373 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-swiftconf\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 
19:00:28.939394 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-combined-ca-bundle\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.939172 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e12040f8-22b1-43fe-a86f-6d39c1ac4c8b" (UID: "e12040f8-22b1-43fe-a86f-6d39c1ac4c8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.940500 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b724a151-32e2-4518-8f64-9d06b50acd55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b724a151-32e2-4518-8f64-9d06b50acd55" (UID: "b724a151-32e2-4518-8f64-9d06b50acd55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.944268 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-kube-api-access-kjlcn" (OuterVolumeSpecName: "kube-api-access-kjlcn") pod "e12040f8-22b1-43fe-a86f-6d39c1ac4c8b" (UID: "e12040f8-22b1-43fe-a86f-6d39c1ac4c8b"). InnerVolumeSpecName "kube-api-access-kjlcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:28 crc kubenswrapper[4770]: I0126 19:00:28.955864 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b724a151-32e2-4518-8f64-9d06b50acd55-kube-api-access-nll67" (OuterVolumeSpecName: "kube-api-access-nll67") pod "b724a151-32e2-4518-8f64-9d06b50acd55" (UID: "b724a151-32e2-4518-8f64-9d06b50acd55"). InnerVolumeSpecName "kube-api-access-nll67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.040451 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-operator-scripts\") pod \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.040623 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4qbj\" (UniqueName: \"kubernetes.io/projected/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-kube-api-access-r4qbj\") pod \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\" (UID: \"93b63ae4-9e1b-4518-b09f-3b5f3893a51e\") " Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041032 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4x8g\" (UniqueName: \"kubernetes.io/projected/ceb06b58-7f92-4704-909b-3c591476f04c-kube-api-access-m4x8g\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" 
Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041144 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-dispersionconf\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-swiftconf\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041226 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-combined-ca-bundle\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041292 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ceb06b58-7f92-4704-909b-3c591476f04c-etc-swift\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041348 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-ring-data-devices\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041385 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-scripts\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: E0126 19:00:29.041388 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 19:00:29 crc kubenswrapper[4770]: E0126 19:00:29.041408 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041447 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nll67\" (UniqueName: \"kubernetes.io/projected/b724a151-32e2-4518-8f64-9d06b50acd55-kube-api-access-nll67\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.041481 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93b63ae4-9e1b-4518-b09f-3b5f3893a51e" (UID: "93b63ae4-9e1b-4518-b09f-3b5f3893a51e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.042159 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ceb06b58-7f92-4704-909b-3c591476f04c-etc-swift\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: E0126 19:00:29.042218 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift podName:f3117c9b-d620-4686-afa7-315bbae0e328 nodeName:}" failed. No retries permitted until 2026-01-26 19:00:30.041443553 +0000 UTC m=+1114.606350285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift") pod "swift-storage-0" (UID: "f3117c9b-d620-4686-afa7-315bbae0e328") : configmap "swift-ring-files" not found Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.042319 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjlcn\" (UniqueName: \"kubernetes.io/projected/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-kube-api-access-kjlcn\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.042355 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.042373 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-scripts\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc 
kubenswrapper[4770]: I0126 19:00:29.042378 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b724a151-32e2-4518-8f64-9d06b50acd55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.042406 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-ring-data-devices\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.045813 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-dispersionconf\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.046268 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-combined-ca-bundle\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.046926 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-kube-api-access-r4qbj" (OuterVolumeSpecName: "kube-api-access-r4qbj") pod "93b63ae4-9e1b-4518-b09f-3b5f3893a51e" (UID: "93b63ae4-9e1b-4518-b09f-3b5f3893a51e"). InnerVolumeSpecName "kube-api-access-r4qbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.049895 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-swiftconf\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.074390 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4x8g\" (UniqueName: \"kubernetes.io/projected/ceb06b58-7f92-4704-909b-3c591476f04c-kube-api-access-m4x8g\") pod \"swift-ring-rebalance-vx59z\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.145718 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.145752 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4qbj\" (UniqueName: \"kubernetes.io/projected/93b63ae4-9e1b-4518-b09f-3b5f3893a51e-kube-api-access-r4qbj\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.235988 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zbpfl" event={"ID":"93b63ae4-9e1b-4518-b09f-3b5f3893a51e","Type":"ContainerDied","Data":"bc83c8fb18afb0961cad9e402fcd79b129a44b73ae5c6e628ff0f8528c1522a5"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.236029 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc83c8fb18afb0961cad9e402fcd79b129a44b73ae5c6e628ff0f8528c1522a5" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.236091 4770 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-db-create-zbpfl" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.237993 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.259626 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pckwh" event={"ID":"b724a151-32e2-4518-8f64-9d06b50acd55","Type":"ContainerDied","Data":"2e28297e8090bc075ae79cb0fc942289ca0e3b43d4c508409a193571316a2709"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.259664 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e28297e8090bc075ae79cb0fc942289ca0e3b43d4c508409a193571316a2709" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.259649 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pckwh" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.262950 4770 generic.go:334] "Generic (PLEG): container finished" podID="e935c454-cbcc-4b53-a12e-4532e2043189" containerID="05aec97748948c51648b73e9b3cda44e32f56912b7cb5597778e6de5ca0f1a52" exitCode=0 Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.263029 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0d4a-account-create-update-42d77" event={"ID":"e935c454-cbcc-4b53-a12e-4532e2043189","Type":"ContainerDied","Data":"05aec97748948c51648b73e9b3cda44e32f56912b7cb5597778e6de5ca0f1a52"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.263063 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0d4a-account-create-update-42d77" event={"ID":"e935c454-cbcc-4b53-a12e-4532e2043189","Type":"ContainerStarted","Data":"6c0c5adcc50cd638c22e666353a565202fa755967ca2ce5b7f140995d03a61fb"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.266657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-b35e-account-create-update-7wnpn" event={"ID":"e12040f8-22b1-43fe-a86f-6d39c1ac4c8b","Type":"ContainerDied","Data":"6b5f9a3678c496b81016458e5e661143e7c5d40c82865ccb56ab145c58379459"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.266691 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5f9a3678c496b81016458e5e661143e7c5d40c82865ccb56ab145c58379459" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.266753 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b35e-account-create-update-7wnpn" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.270215 4770 generic.go:334] "Generic (PLEG): container finished" podID="f78baf61-9a55-4017-a0fe-90336e976053" containerID="dc58b42d81f706a7166ec7cb9b5d32aec628b4839aa6a7d6199ab82b14e380c6" exitCode=0 Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.270360 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" event={"ID":"f78baf61-9a55-4017-a0fe-90336e976053","Type":"ContainerDied","Data":"dc58b42d81f706a7166ec7cb9b5d32aec628b4839aa6a7d6199ab82b14e380c6"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.273088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0","Type":"ContainerStarted","Data":"d4228322eb071a12a8b1f78e5129821c87c0f48a6f2d5ec1c7d4ead922ff8815"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.273553 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.280065 4770 generic.go:334] "Generic (PLEG): container finished" podID="e710d1c6-ece5-400d-b061-8ad6cf59c5b6" containerID="ad0708d6bbfef49a6d1035fa276733220f5b711e07f89383377b05cb112f3ab2" exitCode=0 Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.280168 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-6pgvv" event={"ID":"e710d1c6-ece5-400d-b061-8ad6cf59c5b6","Type":"ContainerDied","Data":"ad0708d6bbfef49a6d1035fa276733220f5b711e07f89383377b05cb112f3ab2"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.285537 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0242-account-create-update-kpq4x" event={"ID":"d76dedbd-e05f-4893-a0b5-9c68a83eb5f4","Type":"ContainerDied","Data":"8b844dffcab2c5551e5508c756179a57a81e5fb2f5ce9b175594cecad33007d2"} Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.285575 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b844dffcab2c5551e5508c756179a57a81e5fb2f5ce9b175594cecad33007d2" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.285643 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0242-account-create-update-kpq4x" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.348743 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.441156007 podStartE2EDuration="43.348720991s" podCreationTimestamp="2026-01-26 18:59:46 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.294531177 +0000 UTC m=+1079.859437909" lastFinishedPulling="2026-01-26 19:00:28.202096161 +0000 UTC m=+1112.767002893" observedRunningTime="2026-01-26 19:00:29.337840984 +0000 UTC m=+1113.902747726" watchObservedRunningTime="2026-01-26 19:00:29.348720991 +0000 UTC m=+1113.913627733" Jan 26 19:00:29 crc kubenswrapper[4770]: I0126 19:00:29.711935 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vx59z"] Jan 26 19:00:29 crc kubenswrapper[4770]: W0126 19:00:29.713564 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceb06b58_7f92_4704_909b_3c591476f04c.slice/crio-868476c68b830b210d731366f9367f661e552525705f7a991781f0f89be0ab94 WatchSource:0}: Error finding container 868476c68b830b210d731366f9367f661e552525705f7a991781f0f89be0ab94: Status 404 returned error can't find the container with id 868476c68b830b210d731366f9367f661e552525705f7a991781f0f89be0ab94 Jan 26 19:00:30 crc kubenswrapper[4770]: I0126 19:00:30.067686 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:30 crc kubenswrapper[4770]: E0126 19:00:30.067952 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 19:00:30 crc kubenswrapper[4770]: E0126 19:00:30.068167 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 19:00:30 crc kubenswrapper[4770]: E0126 19:00:30.068242 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift podName:f3117c9b-d620-4686-afa7-315bbae0e328 nodeName:}" failed. No retries permitted until 2026-01-26 19:00:32.068219872 +0000 UTC m=+1116.633126604 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift") pod "swift-storage-0" (UID: "f3117c9b-d620-4686-afa7-315bbae0e328") : configmap "swift-ring-files" not found Jan 26 19:00:30 crc kubenswrapper[4770]: I0126 19:00:30.295573 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vx59z" event={"ID":"ceb06b58-7f92-4704-909b-3c591476f04c","Type":"ContainerStarted","Data":"868476c68b830b210d731366f9367f661e552525705f7a991781f0f89be0ab94"} Jan 26 19:00:30 crc kubenswrapper[4770]: I0126 19:00:30.330445 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:00:30 crc kubenswrapper[4770]: I0126 19:00:30.330504 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:00:31 crc kubenswrapper[4770]: I0126 19:00:31.308694 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" event={"ID":"f78baf61-9a55-4017-a0fe-90336e976053","Type":"ContainerStarted","Data":"80c4f68ee7592030704ae039b3bdd047dce73c02995cb5e62e76a7af4b1d529b"} Jan 26 19:00:31 crc kubenswrapper[4770]: I0126 19:00:31.309324 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:31 crc kubenswrapper[4770]: I0126 19:00:31.329043 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" 
podStartSLOduration=4.329025078 podStartE2EDuration="4.329025078s" podCreationTimestamp="2026-01-26 19:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:31.327112846 +0000 UTC m=+1115.892019568" watchObservedRunningTime="2026-01-26 19:00:31.329025078 +0000 UTC m=+1115.893931810" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.114437 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dtkfn"] Jan 26 19:00:32 crc kubenswrapper[4770]: E0126 19:00:32.114794 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12040f8-22b1-43fe-a86f-6d39c1ac4c8b" containerName="mariadb-account-create-update" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.114807 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12040f8-22b1-43fe-a86f-6d39c1ac4c8b" containerName="mariadb-account-create-update" Jan 26 19:00:32 crc kubenswrapper[4770]: E0126 19:00:32.114820 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b63ae4-9e1b-4518-b09f-3b5f3893a51e" containerName="mariadb-database-create" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.114827 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b63ae4-9e1b-4518-b09f-3b5f3893a51e" containerName="mariadb-database-create" Jan 26 19:00:32 crc kubenswrapper[4770]: E0126 19:00:32.114851 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b724a151-32e2-4518-8f64-9d06b50acd55" containerName="mariadb-database-create" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.114858 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b724a151-32e2-4518-8f64-9d06b50acd55" containerName="mariadb-database-create" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.115008 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b724a151-32e2-4518-8f64-9d06b50acd55" 
containerName="mariadb-database-create" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.115017 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12040f8-22b1-43fe-a86f-6d39c1ac4c8b" containerName="mariadb-account-create-update" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.115035 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b63ae4-9e1b-4518-b09f-3b5f3893a51e" containerName="mariadb-database-create" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.115559 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.116816 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:32 crc kubenswrapper[4770]: E0126 19:00:32.117083 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 19:00:32 crc kubenswrapper[4770]: E0126 19:00:32.117122 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 19:00:32 crc kubenswrapper[4770]: E0126 19:00:32.117193 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift podName:f3117c9b-d620-4686-afa7-315bbae0e328 nodeName:}" failed. No retries permitted until 2026-01-26 19:00:36.117160532 +0000 UTC m=+1120.682067264 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift") pod "swift-storage-0" (UID: "f3117c9b-d620-4686-afa7-315bbae0e328") : configmap "swift-ring-files" not found Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.120005 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.128648 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dtkfn"] Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.218975 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwcct\" (UniqueName: \"kubernetes.io/projected/5496b748-8842-4e01-8f55-64a318d702af-kube-api-access-zwcct\") pod \"root-account-create-update-dtkfn\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.219256 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5496b748-8842-4e01-8f55-64a318d702af-operator-scripts\") pod \"root-account-create-update-dtkfn\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.320547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwcct\" (UniqueName: \"kubernetes.io/projected/5496b748-8842-4e01-8f55-64a318d702af-kube-api-access-zwcct\") pod \"root-account-create-update-dtkfn\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.320724 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5496b748-8842-4e01-8f55-64a318d702af-operator-scripts\") pod \"root-account-create-update-dtkfn\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.321542 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5496b748-8842-4e01-8f55-64a318d702af-operator-scripts\") pod \"root-account-create-update-dtkfn\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.359646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwcct\" (UniqueName: \"kubernetes.io/projected/5496b748-8842-4e01-8f55-64a318d702af-kube-api-access-zwcct\") pod \"root-account-create-update-dtkfn\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:32 crc kubenswrapper[4770]: I0126 19:00:32.438315 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.444610 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.541656 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935c454-cbcc-4b53-a12e-4532e2043189-operator-scripts\") pod \"e935c454-cbcc-4b53-a12e-4532e2043189\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.541795 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrgt4\" (UniqueName: \"kubernetes.io/projected/e935c454-cbcc-4b53-a12e-4532e2043189-kube-api-access-zrgt4\") pod \"e935c454-cbcc-4b53-a12e-4532e2043189\" (UID: \"e935c454-cbcc-4b53-a12e-4532e2043189\") " Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.542348 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e935c454-cbcc-4b53-a12e-4532e2043189-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e935c454-cbcc-4b53-a12e-4532e2043189" (UID: "e935c454-cbcc-4b53-a12e-4532e2043189"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.549407 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e935c454-cbcc-4b53-a12e-4532e2043189-kube-api-access-zrgt4" (OuterVolumeSpecName: "kube-api-access-zrgt4") pod "e935c454-cbcc-4b53-a12e-4532e2043189" (UID: "e935c454-cbcc-4b53-a12e-4532e2043189"). InnerVolumeSpecName "kube-api-access-zrgt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.631732 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.644099 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrgt4\" (UniqueName: \"kubernetes.io/projected/e935c454-cbcc-4b53-a12e-4532e2043189-kube-api-access-zrgt4\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.644132 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935c454-cbcc-4b53-a12e-4532e2043189-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.744977 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-operator-scripts\") pod \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.745130 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn95c\" (UniqueName: \"kubernetes.io/projected/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-kube-api-access-fn95c\") pod \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\" (UID: \"e710d1c6-ece5-400d-b061-8ad6cf59c5b6\") " Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.746588 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e710d1c6-ece5-400d-b061-8ad6cf59c5b6" (UID: "e710d1c6-ece5-400d-b061-8ad6cf59c5b6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.749932 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-kube-api-access-fn95c" (OuterVolumeSpecName: "kube-api-access-fn95c") pod "e710d1c6-ece5-400d-b061-8ad6cf59c5b6" (UID: "e710d1c6-ece5-400d-b061-8ad6cf59c5b6"). InnerVolumeSpecName "kube-api-access-fn95c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.847688 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:33 crc kubenswrapper[4770]: I0126 19:00:33.848067 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn95c\" (UniqueName: \"kubernetes.io/projected/e710d1c6-ece5-400d-b061-8ad6cf59c5b6-kube-api-access-fn95c\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.016409 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dtkfn"] Jan 26 19:00:34 crc kubenswrapper[4770]: W0126 19:00:34.017352 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5496b748_8842_4e01_8f55_64a318d702af.slice/crio-7b4d1992113c27de8921981ab88d664df616ca154a4c816999555cfd4a6f9d3f WatchSource:0}: Error finding container 7b4d1992113c27de8921981ab88d664df616ca154a4c816999555cfd4a6f9d3f: Status 404 returned error can't find the container with id 7b4d1992113c27de8921981ab88d664df616ca154a4c816999555cfd4a6f9d3f Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.335917 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-6pgvv" 
event={"ID":"e710d1c6-ece5-400d-b061-8ad6cf59c5b6","Type":"ContainerDied","Data":"281eeaa376939d98d3172be55ad5ecf372f9d0b618103f545cb0d35e5af274fe"} Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.335961 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281eeaa376939d98d3172be55ad5ecf372f9d0b618103f545cb0d35e5af274fe" Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.335983 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-6pgvv" Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.337023 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dtkfn" event={"ID":"5496b748-8842-4e01-8f55-64a318d702af","Type":"ContainerStarted","Data":"7b4d1992113c27de8921981ab88d664df616ca154a4c816999555cfd4a6f9d3f"} Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.341816 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerStarted","Data":"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63"} Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.344008 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0d4a-account-create-update-42d77" event={"ID":"e935c454-cbcc-4b53-a12e-4532e2043189","Type":"ContainerDied","Data":"6c0c5adcc50cd638c22e666353a565202fa755967ca2ce5b7f140995d03a61fb"} Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.344068 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c0c5adcc50cd638c22e666353a565202fa755967ca2ce5b7f140995d03a61fb" Jan 26 19:00:34 crc kubenswrapper[4770]: I0126 19:00:34.344152 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-0d4a-account-create-update-42d77" Jan 26 19:00:36 crc kubenswrapper[4770]: I0126 19:00:36.191750 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:36 crc kubenswrapper[4770]: E0126 19:00:36.192102 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 19:00:36 crc kubenswrapper[4770]: E0126 19:00:36.192256 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 19:00:36 crc kubenswrapper[4770]: E0126 19:00:36.192345 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift podName:f3117c9b-d620-4686-afa7-315bbae0e328 nodeName:}" failed. No retries permitted until 2026-01-26 19:00:44.192317584 +0000 UTC m=+1128.757224316 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift") pod "swift-storage-0" (UID: "f3117c9b-d620-4686-afa7-315bbae0e328") : configmap "swift-ring-files" not found Jan 26 19:00:37 crc kubenswrapper[4770]: I0126 19:00:37.167141 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 19:00:37 crc kubenswrapper[4770]: I0126 19:00:37.548903 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:00:37 crc kubenswrapper[4770]: I0126 19:00:37.616465 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7dc4f659-x5dd2"] Jan 26 19:00:37 crc kubenswrapper[4770]: I0126 19:00:37.616712 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerName="dnsmasq-dns" containerID="cri-o://b9a9c12526ec35d9a1d5d376000973cb6db5a4083045e8eda474385f1c7a76ed" gracePeriod=10 Jan 26 19:00:38 crc kubenswrapper[4770]: I0126 19:00:38.382882 4770 generic.go:334] "Generic (PLEG): container finished" podID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerID="b9a9c12526ec35d9a1d5d376000973cb6db5a4083045e8eda474385f1c7a76ed" exitCode=0 Jan 26 19:00:38 crc kubenswrapper[4770]: I0126 19:00:38.382921 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" event={"ID":"de4cef66-1301-4ef9-bac5-f416e92ef9e5","Type":"ContainerDied","Data":"b9a9c12526ec35d9a1d5d376000973cb6db5a4083045e8eda474385f1c7a76ed"} Jan 26 19:00:39 crc kubenswrapper[4770]: I0126 19:00:39.680140 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: 
connect: connection refused" Jan 26 19:00:40 crc kubenswrapper[4770]: I0126 19:00:40.400944 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerStarted","Data":"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824"} Jan 26 19:00:40 crc kubenswrapper[4770]: I0126 19:00:40.694485 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.146174 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.217130 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-nb\") pod \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.217290 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-dns-svc\") pod \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.217333 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlx7t\" (UniqueName: \"kubernetes.io/projected/de4cef66-1301-4ef9-bac5-f416e92ef9e5-kube-api-access-hlx7t\") pod \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.217388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-config\") pod \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.217484 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-sb\") pod \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\" (UID: \"de4cef66-1301-4ef9-bac5-f416e92ef9e5\") " Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.222184 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4cef66-1301-4ef9-bac5-f416e92ef9e5-kube-api-access-hlx7t" (OuterVolumeSpecName: "kube-api-access-hlx7t") pod "de4cef66-1301-4ef9-bac5-f416e92ef9e5" (UID: "de4cef66-1301-4ef9-bac5-f416e92ef9e5"). InnerVolumeSpecName "kube-api-access-hlx7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.258095 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-config" (OuterVolumeSpecName: "config") pod "de4cef66-1301-4ef9-bac5-f416e92ef9e5" (UID: "de4cef66-1301-4ef9-bac5-f416e92ef9e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.258375 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de4cef66-1301-4ef9-bac5-f416e92ef9e5" (UID: "de4cef66-1301-4ef9-bac5-f416e92ef9e5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.259717 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de4cef66-1301-4ef9-bac5-f416e92ef9e5" (UID: "de4cef66-1301-4ef9-bac5-f416e92ef9e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.263200 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de4cef66-1301-4ef9-bac5-f416e92ef9e5" (UID: "de4cef66-1301-4ef9-bac5-f416e92ef9e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.319823 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.319865 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.319878 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.319890 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de4cef66-1301-4ef9-bac5-f416e92ef9e5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:42 
crc kubenswrapper[4770]: I0126 19:00:42.319900 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlx7t\" (UniqueName: \"kubernetes.io/projected/de4cef66-1301-4ef9-bac5-f416e92ef9e5-kube-api-access-hlx7t\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.423026 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" event={"ID":"de4cef66-1301-4ef9-bac5-f416e92ef9e5","Type":"ContainerDied","Data":"593e82b7740c3eb4dd07accc699d6218a651a0f2ad06e7a4374e397beb6f7d12"} Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.423120 4770 scope.go:117] "RemoveContainer" containerID="b9a9c12526ec35d9a1d5d376000973cb6db5a4083045e8eda474385f1c7a76ed" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.423046 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f7dc4f659-x5dd2" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.426541 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vx59z" event={"ID":"ceb06b58-7f92-4704-909b-3c591476f04c","Type":"ContainerStarted","Data":"c43085ef5d72373f3e5fa208758821156c978518f9ab01b59dc8ef7e963164e5"} Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.432209 4770 generic.go:334] "Generic (PLEG): container finished" podID="5496b748-8842-4e01-8f55-64a318d702af" containerID="80ad2f4535615622c532dd8fefd8689692ed919e34cd83d0fa8991c1c8d9b3bb" exitCode=0 Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.432255 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dtkfn" event={"ID":"5496b748-8842-4e01-8f55-64a318d702af","Type":"ContainerDied","Data":"80ad2f4535615622c532dd8fefd8689692ed919e34cd83d0fa8991c1c8d9b3bb"} Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.455787 4770 scope.go:117] "RemoveContainer" 
containerID="be515865a0784c8013b9d6c76d4465db58e2a20629c5d03d10d484377d15b8ec" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.459386 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vx59z" podStartSLOduration=2.240748331 podStartE2EDuration="14.459359789s" podCreationTimestamp="2026-01-26 19:00:28 +0000 UTC" firstStartedPulling="2026-01-26 19:00:29.715725329 +0000 UTC m=+1114.280632061" lastFinishedPulling="2026-01-26 19:00:41.934336777 +0000 UTC m=+1126.499243519" observedRunningTime="2026-01-26 19:00:42.44915325 +0000 UTC m=+1127.014060002" watchObservedRunningTime="2026-01-26 19:00:42.459359789 +0000 UTC m=+1127.024266521" Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.485535 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f7dc4f659-x5dd2"] Jan 26 19:00:42 crc kubenswrapper[4770]: I0126 19:00:42.494591 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f7dc4f659-x5dd2"] Jan 26 19:00:43 crc kubenswrapper[4770]: I0126 19:00:43.776090 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" path="/var/lib/kubelet/pods/de4cef66-1301-4ef9-bac5-f416e92ef9e5/volumes" Jan 26 19:00:43 crc kubenswrapper[4770]: I0126 19:00:43.815976 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:43 crc kubenswrapper[4770]: I0126 19:00:43.945906 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5496b748-8842-4e01-8f55-64a318d702af-operator-scripts\") pod \"5496b748-8842-4e01-8f55-64a318d702af\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " Jan 26 19:00:43 crc kubenswrapper[4770]: I0126 19:00:43.946824 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwcct\" (UniqueName: \"kubernetes.io/projected/5496b748-8842-4e01-8f55-64a318d702af-kube-api-access-zwcct\") pod \"5496b748-8842-4e01-8f55-64a318d702af\" (UID: \"5496b748-8842-4e01-8f55-64a318d702af\") " Jan 26 19:00:43 crc kubenswrapper[4770]: I0126 19:00:43.946511 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5496b748-8842-4e01-8f55-64a318d702af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5496b748-8842-4e01-8f55-64a318d702af" (UID: "5496b748-8842-4e01-8f55-64a318d702af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:43 crc kubenswrapper[4770]: I0126 19:00:43.947988 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5496b748-8842-4e01-8f55-64a318d702af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:43 crc kubenswrapper[4770]: I0126 19:00:43.957437 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5496b748-8842-4e01-8f55-64a318d702af-kube-api-access-zwcct" (OuterVolumeSpecName: "kube-api-access-zwcct") pod "5496b748-8842-4e01-8f55-64a318d702af" (UID: "5496b748-8842-4e01-8f55-64a318d702af"). InnerVolumeSpecName "kube-api-access-zwcct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:44 crc kubenswrapper[4770]: I0126 19:00:44.049861 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwcct\" (UniqueName: \"kubernetes.io/projected/5496b748-8842-4e01-8f55-64a318d702af-kube-api-access-zwcct\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:44 crc kubenswrapper[4770]: I0126 19:00:44.259517 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:00:44 crc kubenswrapper[4770]: E0126 19:00:44.259777 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 19:00:44 crc kubenswrapper[4770]: E0126 19:00:44.259989 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 19:00:44 crc kubenswrapper[4770]: E0126 19:00:44.260060 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift podName:f3117c9b-d620-4686-afa7-315bbae0e328 nodeName:}" failed. No retries permitted until 2026-01-26 19:01:00.260035254 +0000 UTC m=+1144.824941986 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift") pod "swift-storage-0" (UID: "f3117c9b-d620-4686-afa7-315bbae0e328") : configmap "swift-ring-files" not found Jan 26 19:00:44 crc kubenswrapper[4770]: I0126 19:00:44.452913 4770 generic.go:334] "Generic (PLEG): container finished" podID="176a0205-a131-4510-bcf5-420945c4c6ee" containerID="fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454" exitCode=0 Jan 26 19:00:44 crc kubenswrapper[4770]: I0126 19:00:44.452967 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"176a0205-a131-4510-bcf5-420945c4c6ee","Type":"ContainerDied","Data":"fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454"} Jan 26 19:00:44 crc kubenswrapper[4770]: I0126 19:00:44.456319 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dtkfn" event={"ID":"5496b748-8842-4e01-8f55-64a318d702af","Type":"ContainerDied","Data":"7b4d1992113c27de8921981ab88d664df616ca154a4c816999555cfd4a6f9d3f"} Jan 26 19:00:44 crc kubenswrapper[4770]: I0126 19:00:44.456348 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b4d1992113c27de8921981ab88d664df616ca154a4c816999555cfd4a6f9d3f" Jan 26 19:00:44 crc kubenswrapper[4770]: I0126 19:00:44.456385 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dtkfn" Jan 26 19:00:45 crc kubenswrapper[4770]: I0126 19:00:45.467724 4770 generic.go:334] "Generic (PLEG): container finished" podID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerID="5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698" exitCode=0 Jan 26 19:00:45 crc kubenswrapper[4770]: I0126 19:00:45.467819 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"876c1ba4-ebd2-47b9-80d0-5158053c4fb8","Type":"ContainerDied","Data":"5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698"} Jan 26 19:00:45 crc kubenswrapper[4770]: I0126 19:00:45.472007 4770 generic.go:334] "Generic (PLEG): container finished" podID="7e3d608a-c9d7-4a29-b45a-0c175851fdbc" containerID="068bfa0e96ae78b6bff1f8efff78f9f17ffb6d1a412b4ca564ca833336e71fc8" exitCode=0 Jan 26 19:00:45 crc kubenswrapper[4770]: I0126 19:00:45.472042 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7e3d608a-c9d7-4a29-b45a-0c175851fdbc","Type":"ContainerDied","Data":"068bfa0e96ae78b6bff1f8efff78f9f17ffb6d1a412b4ca564ca833336e71fc8"} Jan 26 19:00:45 crc kubenswrapper[4770]: I0126 19:00:45.997337 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hgfvf" podUID="9d2095b9-c866-4424-aa95-31718bd65d61" containerName="ovn-controller" probeResult="failure" output=< Jan 26 19:00:45 crc kubenswrapper[4770]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 19:00:45 crc kubenswrapper[4770]: > Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.048891 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dtdfk" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.061992 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dtdfk" 
Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.282217 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hgfvf-config-dmmxz"] Jan 26 19:00:46 crc kubenswrapper[4770]: E0126 19:00:46.282866 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerName="init" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.282883 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerName="init" Jan 26 19:00:46 crc kubenswrapper[4770]: E0126 19:00:46.282899 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e710d1c6-ece5-400d-b061-8ad6cf59c5b6" containerName="mariadb-database-create" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.282907 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e710d1c6-ece5-400d-b061-8ad6cf59c5b6" containerName="mariadb-database-create" Jan 26 19:00:46 crc kubenswrapper[4770]: E0126 19:00:46.282916 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e935c454-cbcc-4b53-a12e-4532e2043189" containerName="mariadb-account-create-update" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.282922 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e935c454-cbcc-4b53-a12e-4532e2043189" containerName="mariadb-account-create-update" Jan 26 19:00:46 crc kubenswrapper[4770]: E0126 19:00:46.282947 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5496b748-8842-4e01-8f55-64a318d702af" containerName="mariadb-account-create-update" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.282953 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5496b748-8842-4e01-8f55-64a318d702af" containerName="mariadb-account-create-update" Jan 26 19:00:46 crc kubenswrapper[4770]: E0126 19:00:46.282961 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" 
containerName="dnsmasq-dns" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.282966 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerName="dnsmasq-dns" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.283108 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e710d1c6-ece5-400d-b061-8ad6cf59c5b6" containerName="mariadb-database-create" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.283119 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e935c454-cbcc-4b53-a12e-4532e2043189" containerName="mariadb-account-create-update" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.283128 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5496b748-8842-4e01-8f55-64a318d702af" containerName="mariadb-account-create-update" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.283143 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4cef66-1301-4ef9-bac5-f416e92ef9e5" containerName="dnsmasq-dns" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.283669 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.286071 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.305922 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hgfvf-config-dmmxz"] Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.395872 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.395920 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnrm\" (UniqueName: \"kubernetes.io/projected/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-kube-api-access-rrnrm\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.395941 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-scripts\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.396002 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-additional-scripts\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: 
\"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.396075 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run-ovn\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.396092 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-log-ovn\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.483275 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"7e3d608a-c9d7-4a29-b45a-0c175851fdbc","Type":"ContainerStarted","Data":"a2490b76048467d7d10b1e75d9d804ce80fddaec64cebdc61e240611f4a33b7b"} Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.483505 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.485853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"876c1ba4-ebd2-47b9-80d0-5158053c4fb8","Type":"ContainerStarted","Data":"8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735"} Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.486029 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.487944 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"176a0205-a131-4510-bcf5-420945c4c6ee","Type":"ContainerStarted","Data":"73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba"} Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.488135 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.490299 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerStarted","Data":"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1"} Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497242 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run-ovn\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497301 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-log-ovn\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497352 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497386 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rrnrm\" (UniqueName: \"kubernetes.io/projected/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-kube-api-access-rrnrm\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497409 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-scripts\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497512 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-additional-scripts\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497578 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-log-ovn\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497578 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.497582 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run-ovn\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.498514 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-additional-scripts\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.499923 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-scripts\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.534403 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrnrm\" (UniqueName: \"kubernetes.io/projected/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-kube-api-access-rrnrm\") pod \"ovn-controller-hgfvf-config-dmmxz\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.545347 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=49.813624284 podStartE2EDuration="1m6.545326746s" podCreationTimestamp="2026-01-26 18:59:40 +0000 UTC" firstStartedPulling="2026-01-26 18:59:53.839195866 +0000 UTC m=+1078.404102598" lastFinishedPulling="2026-01-26 19:00:10.570898328 +0000 UTC m=+1095.135805060" observedRunningTime="2026-01-26 19:00:46.539930459 +0000 UTC m=+1131.104837181" watchObservedRunningTime="2026-01-26 
19:00:46.545326746 +0000 UTC m=+1131.110233478" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.589418 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=9.875493424 podStartE2EDuration="59.589400169s" podCreationTimestamp="2026-01-26 18:59:47 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.506845414 +0000 UTC m=+1080.071752146" lastFinishedPulling="2026-01-26 19:00:45.220752129 +0000 UTC m=+1129.785658891" observedRunningTime="2026-01-26 19:00:46.582295825 +0000 UTC m=+1131.147202557" watchObservedRunningTime="2026-01-26 19:00:46.589400169 +0000 UTC m=+1131.154306901" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.601997 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:46 crc kubenswrapper[4770]: I0126 19:00:46.628015 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.897202786 podStartE2EDuration="1m6.627993133s" podCreationTimestamp="2026-01-26 18:59:40 +0000 UTC" firstStartedPulling="2026-01-26 18:59:53.841089508 +0000 UTC m=+1078.405996240" lastFinishedPulling="2026-01-26 19:00:10.571879855 +0000 UTC m=+1095.136786587" observedRunningTime="2026-01-26 19:00:46.625530946 +0000 UTC m=+1131.190437688" watchObservedRunningTime="2026-01-26 19:00:46.627993133 +0000 UTC m=+1131.192899865" Jan 26 19:00:47 crc kubenswrapper[4770]: I0126 19:00:47.136824 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.991510302 podStartE2EDuration="1m7.136802752s" podCreationTimestamp="2026-01-26 18:59:40 +0000 UTC" firstStartedPulling="2026-01-26 18:59:55.271761958 +0000 UTC m=+1079.836668690" lastFinishedPulling="2026-01-26 19:00:10.417054378 +0000 UTC m=+1094.981961140" observedRunningTime="2026-01-26 19:00:46.713080776 +0000 
UTC m=+1131.277987528" watchObservedRunningTime="2026-01-26 19:00:47.136802752 +0000 UTC m=+1131.701709484" Jan 26 19:00:47 crc kubenswrapper[4770]: I0126 19:00:47.145504 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hgfvf-config-dmmxz"] Jan 26 19:00:47 crc kubenswrapper[4770]: W0126 19:00:47.151900 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea8197d_5f5f_4b0b_8e6a_493e72d5cadb.slice/crio-ce92543c68da63801ff917143ef0fdbcd2cd397ede08a25fa2b54b8efdd0c43c WatchSource:0}: Error finding container ce92543c68da63801ff917143ef0fdbcd2cd397ede08a25fa2b54b8efdd0c43c: Status 404 returned error can't find the container with id ce92543c68da63801ff917143ef0fdbcd2cd397ede08a25fa2b54b8efdd0c43c Jan 26 19:00:47 crc kubenswrapper[4770]: I0126 19:00:47.498830 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hgfvf-config-dmmxz" event={"ID":"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb","Type":"ContainerStarted","Data":"1a7747b6b1e9668f53c62b6fda82b664cc40c65af97e3de8a0afa5897f46c891"} Jan 26 19:00:47 crc kubenswrapper[4770]: I0126 19:00:47.499149 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hgfvf-config-dmmxz" event={"ID":"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb","Type":"ContainerStarted","Data":"ce92543c68da63801ff917143ef0fdbcd2cd397ede08a25fa2b54b8efdd0c43c"} Jan 26 19:00:47 crc kubenswrapper[4770]: I0126 19:00:47.522905 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hgfvf-config-dmmxz" podStartSLOduration=1.522886491 podStartE2EDuration="1.522886491s" podCreationTimestamp="2026-01-26 19:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:47.518878842 +0000 UTC m=+1132.083785574" watchObservedRunningTime="2026-01-26 
19:00:47.522886491 +0000 UTC m=+1132.087793223" Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.472735 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.473125 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.478547 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.509689 4770 generic.go:334] "Generic (PLEG): container finished" podID="0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" containerID="1a7747b6b1e9668f53c62b6fda82b664cc40c65af97e3de8a0afa5897f46c891" exitCode=0 Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.509763 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hgfvf-config-dmmxz" event={"ID":"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb","Type":"ContainerDied","Data":"1a7747b6b1e9668f53c62b6fda82b664cc40c65af97e3de8a0afa5897f46c891"} Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.511390 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.674694 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dtkfn"] Jan 26 19:00:48 crc kubenswrapper[4770]: I0126 19:00:48.684570 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dtkfn"] Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.777968 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5496b748-8842-4e01-8f55-64a318d702af" path="/var/lib/kubelet/pods/5496b748-8842-4e01-8f55-64a318d702af/volumes" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.938440 4770 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956643 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-additional-scripts\") pod \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956718 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run\") pod \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956837 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-log-ovn\") pod \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956875 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrnrm\" (UniqueName: \"kubernetes.io/projected/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-kube-api-access-rrnrm\") pod \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956866 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run" (OuterVolumeSpecName: "var-run") pod "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" (UID: "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956907 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run-ovn\") pod \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956932 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-scripts\") pod \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\" (UID: \"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb\") " Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.956979 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" (UID: "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.957022 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" (UID: "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.957395 4770 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.957413 4770 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.957424 4770 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.957446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" (UID: "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.957929 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-scripts" (OuterVolumeSpecName: "scripts") pod "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" (UID: "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:49 crc kubenswrapper[4770]: I0126 19:00:49.965969 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-kube-api-access-rrnrm" (OuterVolumeSpecName: "kube-api-access-rrnrm") pod "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" (UID: "0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb"). InnerVolumeSpecName "kube-api-access-rrnrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.059114 4770 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.059149 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrnrm\" (UniqueName: \"kubernetes.io/projected/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-kube-api-access-rrnrm\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.059163 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.524671 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hgfvf-config-dmmxz" event={"ID":"0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb","Type":"ContainerDied","Data":"ce92543c68da63801ff917143ef0fdbcd2cd397ede08a25fa2b54b8efdd0c43c"} Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.524770 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce92543c68da63801ff917143ef0fdbcd2cd397ede08a25fa2b54b8efdd0c43c" Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.524714 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hgfvf-config-dmmxz" Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.616356 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hgfvf-config-dmmxz"] Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.623321 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hgfvf-config-dmmxz"] Jan 26 19:00:50 crc kubenswrapper[4770]: I0126 19:00:50.997084 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hgfvf" Jan 26 19:00:51 crc kubenswrapper[4770]: I0126 19:00:51.119208 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:00:51 crc kubenswrapper[4770]: I0126 19:00:51.531869 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="prometheus" containerID="cri-o://09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63" gracePeriod=600 Jan 26 19:00:51 crc kubenswrapper[4770]: I0126 19:00:51.531953 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="thanos-sidecar" containerID="cri-o://1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1" gracePeriod=600 Jan 26 19:00:51 crc kubenswrapper[4770]: I0126 19:00:51.531960 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="config-reloader" containerID="cri-o://dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824" gracePeriod=600 Jan 26 19:00:51 crc kubenswrapper[4770]: I0126 19:00:51.777209 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" path="/var/lib/kubelet/pods/0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb/volumes" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.134106 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196009 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hxpr\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-kube-api-access-7hxpr\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196381 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d267c82-de7b-48b9-98f5-66d78067778d-config-out\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196460 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-2\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196506 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-1\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196567 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-web-config\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196623 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-config\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196674 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-0\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196714 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-tls-assets\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196914 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: \"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.196958 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-thanos-prometheus-http-client-file\") pod \"2d267c82-de7b-48b9-98f5-66d78067778d\" (UID: 
\"2d267c82-de7b-48b9-98f5-66d78067778d\") " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.197229 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.197340 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.197541 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.197609 4770 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.197631 4770 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.202848 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.206417 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-kube-api-access-7hxpr" (OuterVolumeSpecName: "kube-api-access-7hxpr") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "kube-api-access-7hxpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.207726 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). 
InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.207738 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d267c82-de7b-48b9-98f5-66d78067778d-config-out" (OuterVolumeSpecName: "config-out") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.210041 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-config" (OuterVolumeSpecName: "config") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.222249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.237080 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-web-config" (OuterVolumeSpecName: "web-config") pod "2d267c82-de7b-48b9-98f5-66d78067778d" (UID: "2d267c82-de7b-48b9-98f5-66d78067778d"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299186 4770 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299264 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") on node \"crc\" " Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299280 4770 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299291 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hxpr\" (UniqueName: \"kubernetes.io/projected/2d267c82-de7b-48b9-98f5-66d78067778d-kube-api-access-7hxpr\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299302 4770 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2d267c82-de7b-48b9-98f5-66d78067778d-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299311 4770 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2d267c82-de7b-48b9-98f5-66d78067778d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299320 4770 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.299328 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d267c82-de7b-48b9-98f5-66d78067778d-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.315820 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.315982 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91") on node "crc" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.400683 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553294 4770 generic.go:334] "Generic (PLEG): container finished" podID="2d267c82-de7b-48b9-98f5-66d78067778d" containerID="1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1" exitCode=0 Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553331 4770 generic.go:334] "Generic (PLEG): container finished" podID="2d267c82-de7b-48b9-98f5-66d78067778d" containerID="dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824" exitCode=0 Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553340 4770 generic.go:334] "Generic (PLEG): container finished" podID="2d267c82-de7b-48b9-98f5-66d78067778d" containerID="09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63" exitCode=0 Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553379 4770 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553401 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerDied","Data":"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1"} Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553440 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerDied","Data":"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824"} Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553458 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerDied","Data":"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63"} Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553476 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2d267c82-de7b-48b9-98f5-66d78067778d","Type":"ContainerDied","Data":"3f9babc1954ccb6dbad9de9b64c33670d6d7aadf621204e444397014aac18fdb"} Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.553501 4770 scope.go:117] "RemoveContainer" containerID="1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.555760 4770 generic.go:334] "Generic (PLEG): container finished" podID="ceb06b58-7f92-4704-909b-3c591476f04c" containerID="c43085ef5d72373f3e5fa208758821156c978518f9ab01b59dc8ef7e963164e5" exitCode=0 Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.555792 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vx59z" 
event={"ID":"ceb06b58-7f92-4704-909b-3c591476f04c","Type":"ContainerDied","Data":"c43085ef5d72373f3e5fa208758821156c978518f9ab01b59dc8ef7e963164e5"} Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.575661 4770 scope.go:117] "RemoveContainer" containerID="dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.598119 4770 scope.go:117] "RemoveContainer" containerID="09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.607279 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.616562 4770 scope.go:117] "RemoveContainer" containerID="d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.638217 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.646546 4770 scope.go:117] "RemoveContainer" containerID="1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1" Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.647048 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1\": container with ID starting with 1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1 not found: ID does not exist" containerID="1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.647099 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1"} err="failed to get container status \"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1\": rpc 
error: code = NotFound desc = could not find container \"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1\": container with ID starting with 1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.647126 4770 scope.go:117] "RemoveContainer" containerID="dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824" Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.647482 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824\": container with ID starting with dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824 not found: ID does not exist" containerID="dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.647515 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824"} err="failed to get container status \"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824\": rpc error: code = NotFound desc = could not find container \"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824\": container with ID starting with dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.647536 4770 scope.go:117] "RemoveContainer" containerID="09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63" Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.647802 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63\": container with ID starting with 
09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63 not found: ID does not exist" containerID="09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.647832 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63"} err="failed to get container status \"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63\": rpc error: code = NotFound desc = could not find container \"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63\": container with ID starting with 09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.647850 4770 scope.go:117] "RemoveContainer" containerID="d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d" Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.648126 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d\": container with ID starting with d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d not found: ID does not exist" containerID="d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648154 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d"} err="failed to get container status \"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d\": rpc error: code = NotFound desc = could not find container \"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d\": container with ID starting with d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d not found: ID does not 
exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648172 4770 scope.go:117] "RemoveContainer" containerID="1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648407 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1"} err="failed to get container status \"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1\": rpc error: code = NotFound desc = could not find container \"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1\": container with ID starting with 1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648431 4770 scope.go:117] "RemoveContainer" containerID="dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648639 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824"} err="failed to get container status \"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824\": rpc error: code = NotFound desc = could not find container \"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824\": container with ID starting with dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648660 4770 scope.go:117] "RemoveContainer" containerID="09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648896 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63"} err="failed to get container status 
\"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63\": rpc error: code = NotFound desc = could not find container \"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63\": container with ID starting with 09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.648920 4770 scope.go:117] "RemoveContainer" containerID="d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649124 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d"} err="failed to get container status \"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d\": rpc error: code = NotFound desc = could not find container \"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d\": container with ID starting with d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649147 4770 scope.go:117] "RemoveContainer" containerID="1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649375 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1"} err="failed to get container status \"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1\": rpc error: code = NotFound desc = could not find container \"1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1\": container with ID starting with 1e8a8aac51aece1838735aa174392bbd3041474d1d0c4d2e4608ff1430731ef1 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649396 4770 scope.go:117] "RemoveContainer" 
containerID="dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649606 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824"} err="failed to get container status \"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824\": rpc error: code = NotFound desc = could not find container \"dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824\": container with ID starting with dbe6cfb4c27dc0f3895e15d86935ce4453415420ec4a3b836189eff161392824 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649627 4770 scope.go:117] "RemoveContainer" containerID="09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649868 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63"} err="failed to get container status \"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63\": rpc error: code = NotFound desc = could not find container \"09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63\": container with ID starting with 09256c61875d3fd8c791fcb03c5dbdfa7c152757e62a6747225eaacba954fb63 not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.649891 4770 scope.go:117] "RemoveContainer" containerID="d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.650114 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d"} err="failed to get container status \"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d\": rpc error: code = NotFound desc = could 
not find container \"d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d\": container with ID starting with d6c82b8335abcbdae31cf9e384c716821f8068e4ca499cdb56214f5b5de66c2d not found: ID does not exist" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.662130 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.662863 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="config-reloader" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.662887 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="config-reloader" Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.662900 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="init-config-reloader" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.662909 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="init-config-reloader" Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.662924 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="thanos-sidecar" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.662932 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="thanos-sidecar" Jan 26 19:00:52 crc kubenswrapper[4770]: E0126 19:00:52.662996 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="prometheus" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.663005 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="prometheus" Jan 26 19:00:52 crc 
kubenswrapper[4770]: E0126 19:00:52.663021 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" containerName="ovn-config" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.663030 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" containerName="ovn-config" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.663229 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea8197d-5f5f-4b0b-8e6a-493e72d5cadb" containerName="ovn-config" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.663251 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="thanos-sidecar" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.663270 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="config-reloader" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.663282 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" containerName="prometheus" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.665153 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.670039 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.670735 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.671093 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.671247 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.671393 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.671720 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gqjgz" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.671863 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.672137 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.673075 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.679091 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.708487 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b8f0de9-6829-4178-8fdb-647aeac4384d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.708672 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.708749 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.708818 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwtqk\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-kube-api-access-rwtqk\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.708853 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.708928 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.708976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.709027 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.709076 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc 
kubenswrapper[4770]: I0126 19:00:52.709105 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.709165 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.709287 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.709330 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.809998 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810055 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810073 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810090 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810113 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810131 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810162 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810189 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810223 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810263 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b8f0de9-6829-4178-8fdb-647aeac4384d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810329 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810358 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.810390 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwtqk\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-kube-api-access-rwtqk\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.812026 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.812955 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.813225 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.815203 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.815243 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bce0a61bb2b9f961be74694fe5f6cf0aff9e298c0837c7d91488158ec6fad94/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.815308 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.815783 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.816166 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.817229 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.817915 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b8f0de9-6829-4178-8fdb-647aeac4384d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.818387 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.827574 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwtqk\" (UniqueName: 
\"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-kube-api-access-rwtqk\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.828036 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.829756 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-config\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:52 crc kubenswrapper[4770]: I0126 19:00:52.862821 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.013161 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.487746 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:00:53 crc kubenswrapper[4770]: W0126 19:00:53.504492 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8f0de9_6829_4178_8fdb_647aeac4384d.slice/crio-f13fe82d51bf1ea3edc12c2c9b641556b8be2aa09b120a2baa4d8451cc9ffa18 WatchSource:0}: Error finding container f13fe82d51bf1ea3edc12c2c9b641556b8be2aa09b120a2baa4d8451cc9ffa18: Status 404 returned error can't find the container with id f13fe82d51bf1ea3edc12c2c9b641556b8be2aa09b120a2baa4d8451cc9ffa18 Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.569784 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerStarted","Data":"f13fe82d51bf1ea3edc12c2c9b641556b8be2aa09b120a2baa4d8451cc9ffa18"} Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.658426 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-57gj7"] Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.660307 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.671226 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-57gj7"] Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.679073 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.722045 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-operator-scripts\") pod \"root-account-create-update-57gj7\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.722135 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmf8c\" (UniqueName: \"kubernetes.io/projected/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-kube-api-access-zmf8c\") pod \"root-account-create-update-57gj7\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.778284 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d267c82-de7b-48b9-98f5-66d78067778d" path="/var/lib/kubelet/pods/2d267c82-de7b-48b9-98f5-66d78067778d/volumes" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.823486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-operator-scripts\") pod \"root-account-create-update-57gj7\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.823569 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmf8c\" (UniqueName: \"kubernetes.io/projected/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-kube-api-access-zmf8c\") pod \"root-account-create-update-57gj7\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.824322 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-operator-scripts\") pod \"root-account-create-update-57gj7\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.840256 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmf8c\" (UniqueName: \"kubernetes.io/projected/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-kube-api-access-zmf8c\") pod \"root-account-create-update-57gj7\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.920954 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.924658 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-dispersionconf\") pod \"ceb06b58-7f92-4704-909b-3c591476f04c\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.924761 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ceb06b58-7f92-4704-909b-3c591476f04c-etc-swift\") pod \"ceb06b58-7f92-4704-909b-3c591476f04c\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.924828 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-ring-data-devices\") pod \"ceb06b58-7f92-4704-909b-3c591476f04c\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.924888 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4x8g\" (UniqueName: \"kubernetes.io/projected/ceb06b58-7f92-4704-909b-3c591476f04c-kube-api-access-m4x8g\") pod \"ceb06b58-7f92-4704-909b-3c591476f04c\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.925750 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ceb06b58-7f92-4704-909b-3c591476f04c" (UID: "ceb06b58-7f92-4704-909b-3c591476f04c"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.926592 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb06b58-7f92-4704-909b-3c591476f04c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ceb06b58-7f92-4704-909b-3c591476f04c" (UID: "ceb06b58-7f92-4704-909b-3c591476f04c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.929732 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb06b58-7f92-4704-909b-3c591476f04c-kube-api-access-m4x8g" (OuterVolumeSpecName: "kube-api-access-m4x8g") pod "ceb06b58-7f92-4704-909b-3c591476f04c" (UID: "ceb06b58-7f92-4704-909b-3c591476f04c"). InnerVolumeSpecName "kube-api-access-m4x8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:53 crc kubenswrapper[4770]: I0126 19:00:53.932242 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ceb06b58-7f92-4704-909b-3c591476f04c" (UID: "ceb06b58-7f92-4704-909b-3c591476f04c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.004462 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.028434 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-combined-ca-bundle\") pod \"ceb06b58-7f92-4704-909b-3c591476f04c\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.028495 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-swiftconf\") pod \"ceb06b58-7f92-4704-909b-3c591476f04c\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.028546 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-scripts\") pod \"ceb06b58-7f92-4704-909b-3c591476f04c\" (UID: \"ceb06b58-7f92-4704-909b-3c591476f04c\") " Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.029913 4770 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.029941 4770 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ceb06b58-7f92-4704-909b-3c591476f04c-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.029953 4770 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.029966 4770 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4x8g\" (UniqueName: \"kubernetes.io/projected/ceb06b58-7f92-4704-909b-3c591476f04c-kube-api-access-m4x8g\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.051477 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceb06b58-7f92-4704-909b-3c591476f04c" (UID: "ceb06b58-7f92-4704-909b-3c591476f04c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.051507 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-scripts" (OuterVolumeSpecName: "scripts") pod "ceb06b58-7f92-4704-909b-3c591476f04c" (UID: "ceb06b58-7f92-4704-909b-3c591476f04c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.055133 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ceb06b58-7f92-4704-909b-3c591476f04c" (UID: "ceb06b58-7f92-4704-909b-3c591476f04c"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.138445 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.138477 4770 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ceb06b58-7f92-4704-909b-3c591476f04c-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.138491 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceb06b58-7f92-4704-909b-3c591476f04c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.384795 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-57gj7"] Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.577729 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-57gj7" event={"ID":"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2","Type":"ContainerStarted","Data":"0ec454fcba45b29b799b5e5e1b8b87758ecccaf468c7db65bc685faf08638293"} Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.578064 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-57gj7" event={"ID":"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2","Type":"ContainerStarted","Data":"84ff2b898fb01194d1299241c16eaac8b65ca0dde3d12eccba0250a5819181fd"} Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.579433 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vx59z" event={"ID":"ceb06b58-7f92-4704-909b-3c591476f04c","Type":"ContainerDied","Data":"868476c68b830b210d731366f9367f661e552525705f7a991781f0f89be0ab94"} Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 
19:00:54.579454 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868476c68b830b210d731366f9367f661e552525705f7a991781f0f89be0ab94" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.579505 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vx59z" Jan 26 19:00:54 crc kubenswrapper[4770]: I0126 19:00:54.598926 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-57gj7" podStartSLOduration=1.59891001 podStartE2EDuration="1.59891001s" podCreationTimestamp="2026-01-26 19:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:00:54.590185081 +0000 UTC m=+1139.155091813" watchObservedRunningTime="2026-01-26 19:00:54.59891001 +0000 UTC m=+1139.163816742" Jan 26 19:00:55 crc kubenswrapper[4770]: I0126 19:00:55.598642 4770 generic.go:334] "Generic (PLEG): container finished" podID="d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2" containerID="0ec454fcba45b29b799b5e5e1b8b87758ecccaf468c7db65bc685faf08638293" exitCode=0 Jan 26 19:00:55 crc kubenswrapper[4770]: I0126 19:00:55.598686 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-57gj7" event={"ID":"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2","Type":"ContainerDied","Data":"0ec454fcba45b29b799b5e5e1b8b87758ecccaf468c7db65bc685faf08638293"} Jan 26 19:00:56 crc kubenswrapper[4770]: I0126 19:00:56.609502 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerStarted","Data":"137ef6b5aa37f18a214456c1119563d2719b46867dddaeecc51a319ccfa30bbc"} Jan 26 19:00:56 crc kubenswrapper[4770]: I0126 19:00:56.968870 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-57gj7" Jan 26 19:00:56 crc kubenswrapper[4770]: I0126 19:00:56.990983 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-operator-scripts\") pod \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " Jan 26 19:00:56 crc kubenswrapper[4770]: I0126 19:00:56.991275 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmf8c\" (UniqueName: \"kubernetes.io/projected/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-kube-api-access-zmf8c\") pod \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\" (UID: \"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2\") " Jan 26 19:00:56 crc kubenswrapper[4770]: I0126 19:00:56.991546 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2" (UID: "d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:00:56 crc kubenswrapper[4770]: I0126 19:00:56.992045 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:56 crc kubenswrapper[4770]: I0126 19:00:56.998308 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-kube-api-access-zmf8c" (OuterVolumeSpecName: "kube-api-access-zmf8c") pod "d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2" (UID: "d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2"). InnerVolumeSpecName "kube-api-access-zmf8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:00:57 crc kubenswrapper[4770]: I0126 19:00:57.093337 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmf8c\" (UniqueName: \"kubernetes.io/projected/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2-kube-api-access-zmf8c\") on node \"crc\" DevicePath \"\"" Jan 26 19:00:57 crc kubenswrapper[4770]: I0126 19:00:57.618796 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-57gj7" event={"ID":"d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2","Type":"ContainerDied","Data":"84ff2b898fb01194d1299241c16eaac8b65ca0dde3d12eccba0250a5819181fd"} Jan 26 19:00:57 crc kubenswrapper[4770]: I0126 19:00:57.619623 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84ff2b898fb01194d1299241c16eaac8b65ca0dde3d12eccba0250a5819181fd" Jan 26 19:00:57 crc kubenswrapper[4770]: I0126 19:00:57.618860 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-57gj7" Jan 26 19:01:00 crc kubenswrapper[4770]: I0126 19:01:00.330807 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:01:00 crc kubenswrapper[4770]: I0126 19:01:00.331188 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:01:00 crc kubenswrapper[4770]: I0126 19:01:00.352264 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:01:00 crc kubenswrapper[4770]: I0126 19:01:00.364854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3117c9b-d620-4686-afa7-315bbae0e328-etc-swift\") pod \"swift-storage-0\" (UID: \"f3117c9b-d620-4686-afa7-315bbae0e328\") " pod="openstack/swift-storage-0" Jan 26 19:01:00 crc kubenswrapper[4770]: I0126 19:01:00.492962 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 19:01:01 crc kubenswrapper[4770]: I0126 19:01:01.070291 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 19:01:01 crc kubenswrapper[4770]: I0126 19:01:01.518451 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 26 19:01:01 crc kubenswrapper[4770]: I0126 19:01:01.651084 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"5325df1f03320cb77a31ac69c1efe0989a32841b7d5eb65036b93377de8c7373"} Jan 26 19:01:01 crc kubenswrapper[4770]: I0126 19:01:01.815431 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 26 19:01:02 crc kubenswrapper[4770]: I0126 19:01:02.109239 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="7e3d608a-c9d7-4a29-b45a-0c175851fdbc" 
containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 26 19:01:02 crc kubenswrapper[4770]: I0126 19:01:02.661328 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"a37293f6714662b4eff1a54adc25df3a5a26900ed169fa79d749aa075b0fe08e"} Jan 26 19:01:02 crc kubenswrapper[4770]: I0126 19:01:02.661379 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"a77cfbb9a76226d3a7fc5a634e42d5d48335bc252416d4888f2a689875535858"} Jan 26 19:01:02 crc kubenswrapper[4770]: I0126 19:01:02.661393 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"acd2c56318264e090508abb33c200b5a90ff0589fab04fd5064955512642f3cd"} Jan 26 19:01:03 crc kubenswrapper[4770]: I0126 19:01:03.671561 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"1258410e042dbe4481546688823493ddf561640e11aa9a14a0c8d6134694a419"} Jan 26 19:01:03 crc kubenswrapper[4770]: I0126 19:01:03.671955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"8156dea5c86528e9cc4260fb9acaea8b6b2f54e15df4552e9fe70801132c592d"} Jan 26 19:01:03 crc kubenswrapper[4770]: I0126 19:01:03.673247 4770 generic.go:334] "Generic (PLEG): container finished" podID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerID="137ef6b5aa37f18a214456c1119563d2719b46867dddaeecc51a319ccfa30bbc" exitCode=0 Jan 26 19:01:03 crc kubenswrapper[4770]: I0126 19:01:03.673297 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerDied","Data":"137ef6b5aa37f18a214456c1119563d2719b46867dddaeecc51a319ccfa30bbc"} Jan 26 19:01:04 crc kubenswrapper[4770]: I0126 19:01:04.684410 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"30310c82f4c9835a2d5f85af67b65b883b28e534424bb914a96d5b12fa1fc75d"} Jan 26 19:01:04 crc kubenswrapper[4770]: I0126 19:01:04.684768 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"269616e12a26df1d596115135d8ac5eb43daf6bc5aeccce1e79bbc6b93f2f189"} Jan 26 19:01:04 crc kubenswrapper[4770]: I0126 19:01:04.684783 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"2747d84695705e651ed798c3a398ca7f61bb05d4329caa810fc1a00faf8361a0"} Jan 26 19:01:04 crc kubenswrapper[4770]: I0126 19:01:04.686667 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerStarted","Data":"eb6562b20a3a132052e1a9d98a952dd191f4f26f39c4f4f69a8e376bbcc54e50"} Jan 26 19:01:05 crc kubenswrapper[4770]: I0126 19:01:05.700373 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"d3407da9c83670d31afd7f41205e9438b68f48b4e448d23b1d8b2dfadb9af465"} Jan 26 19:01:05 crc kubenswrapper[4770]: I0126 19:01:05.700940 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"16b9740e3a102a9cca01987c70891e9822e0d260449bd9106aa967436e473364"} Jan 26 19:01:05 crc kubenswrapper[4770]: I0126 19:01:05.700956 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"16ec579a182732fbcf81a541c670de9e667de31fc937dd72ea6b05a642d438f6"} Jan 26 19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.714065 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerStarted","Data":"2ba749296766f922be27ee679648bf0d878341f7396138398596fe2c4b6c09c2"} Jan 26 19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.714516 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerStarted","Data":"a4d76a4495d70df695cd35c4c2377357b0f03c4e1b20fdcd7f9402bc7c642ac8"} Jan 26 19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.720904 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"f1b02fd975378ca5a75d06128dadbc261d53eb9825a4dcd64349c2ede01c03ee"} Jan 26 19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.720955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"d4217349b416cbd4aedd9d4bdbb3e4a04120e89ae41e6c2385bb0c47775bf1c8"} Jan 26 19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.720970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"f8149743a1a570cbba8e46bb17c9720173ba6ec49ac1a5fbf993579059c67a9b"} Jan 26 
19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.720983 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f3117c9b-d620-4686-afa7-315bbae0e328","Type":"ContainerStarted","Data":"4e54ec38886b022f55e5cc0f1152543ab0ae30da73210269b0fcecc5f8991b2c"} Jan 26 19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.749116 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.749099239 podStartE2EDuration="14.749099239s" podCreationTimestamp="2026-01-26 19:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:06.747444624 +0000 UTC m=+1151.312351366" watchObservedRunningTime="2026-01-26 19:01:06.749099239 +0000 UTC m=+1151.314005971" Jan 26 19:01:06 crc kubenswrapper[4770]: I0126 19:01:06.791836 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.021964729 podStartE2EDuration="39.791816706s" podCreationTimestamp="2026-01-26 19:00:27 +0000 UTC" firstStartedPulling="2026-01-26 19:01:01.080370498 +0000 UTC m=+1145.645277230" lastFinishedPulling="2026-01-26 19:01:04.850222475 +0000 UTC m=+1149.415129207" observedRunningTime="2026-01-26 19:01:06.786776348 +0000 UTC m=+1151.351683090" watchObservedRunningTime="2026-01-26 19:01:06.791816706 +0000 UTC m=+1151.356723438" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.100488 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d87d859d9-ll7rh"] Jan 26 19:01:07 crc kubenswrapper[4770]: E0126 19:01:07.101118 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2" containerName="mariadb-account-create-update" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.101136 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2" containerName="mariadb-account-create-update" Jan 26 19:01:07 crc kubenswrapper[4770]: E0126 19:01:07.101160 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb06b58-7f92-4704-909b-3c591476f04c" containerName="swift-ring-rebalance" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.101168 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb06b58-7f92-4704-909b-3c591476f04c" containerName="swift-ring-rebalance" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.101331 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2" containerName="mariadb-account-create-update" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.101344 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb06b58-7f92-4704-909b-3c591476f04c" containerName="swift-ring-rebalance" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.102240 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.105032 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.108272 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d87d859d9-ll7rh"] Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.263608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.263656 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfnm\" (UniqueName: \"kubernetes.io/projected/2607908d-b3c2-41a1-b445-386aacb914f1-kube-api-access-9mfnm\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.263832 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.263951 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-swift-storage-0\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: 
\"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.264094 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-svc\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.264154 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-config\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.365629 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.365793 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-swift-storage-0\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.365871 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-svc\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " 
pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.365903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-config\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.365963 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.365999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfnm\" (UniqueName: \"kubernetes.io/projected/2607908d-b3c2-41a1-b445-386aacb914f1-kube-api-access-9mfnm\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.367352 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-nb\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.367481 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-config\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: 
I0126 19:01:07.367547 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-sb\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.367551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-swift-storage-0\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.367904 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-svc\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.391218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfnm\" (UniqueName: \"kubernetes.io/projected/2607908d-b3c2-41a1-b445-386aacb914f1-kube-api-access-9mfnm\") pod \"dnsmasq-dns-6d87d859d9-ll7rh\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.419119 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:07 crc kubenswrapper[4770]: I0126 19:01:07.919986 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d87d859d9-ll7rh"] Jan 26 19:01:07 crc kubenswrapper[4770]: W0126 19:01:07.952029 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2607908d_b3c2_41a1_b445_386aacb914f1.slice/crio-cbc16fe85b85cb698b3018aaeda366f25234d17702dd9b3597880431d8d8a799 WatchSource:0}: Error finding container cbc16fe85b85cb698b3018aaeda366f25234d17702dd9b3597880431d8d8a799: Status 404 returned error can't find the container with id cbc16fe85b85cb698b3018aaeda366f25234d17702dd9b3597880431d8d8a799 Jan 26 19:01:08 crc kubenswrapper[4770]: I0126 19:01:08.013969 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 19:01:08 crc kubenswrapper[4770]: I0126 19:01:08.014040 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 19:01:08 crc kubenswrapper[4770]: I0126 19:01:08.019578 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 19:01:08 crc kubenswrapper[4770]: I0126 19:01:08.740467 4770 generic.go:334] "Generic (PLEG): container finished" podID="2607908d-b3c2-41a1-b445-386aacb914f1" containerID="703a49ed8544395bb8dc435e829ca9d108a2da655c3ead922cbbab8d0528cfe9" exitCode=0 Jan 26 19:01:08 crc kubenswrapper[4770]: I0126 19:01:08.740801 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" event={"ID":"2607908d-b3c2-41a1-b445-386aacb914f1","Type":"ContainerDied","Data":"703a49ed8544395bb8dc435e829ca9d108a2da655c3ead922cbbab8d0528cfe9"} Jan 26 19:01:08 crc kubenswrapper[4770]: I0126 19:01:08.740893 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" event={"ID":"2607908d-b3c2-41a1-b445-386aacb914f1","Type":"ContainerStarted","Data":"cbc16fe85b85cb698b3018aaeda366f25234d17702dd9b3597880431d8d8a799"} Jan 26 19:01:08 crc kubenswrapper[4770]: I0126 19:01:08.746182 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 19:01:09 crc kubenswrapper[4770]: I0126 19:01:09.752213 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" event={"ID":"2607908d-b3c2-41a1-b445-386aacb914f1","Type":"ContainerStarted","Data":"f741360c676c664ce9827e53e3f6fcc77e91d052b633fb159a0361adf506b1f4"} Jan 26 19:01:09 crc kubenswrapper[4770]: I0126 19:01:09.774994 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podStartSLOduration=2.774979299 podStartE2EDuration="2.774979299s" podCreationTimestamp="2026-01-26 19:01:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:09.770693553 +0000 UTC m=+1154.335600325" watchObservedRunningTime="2026-01-26 19:01:09.774979299 +0000 UTC m=+1154.339886031" Jan 26 19:01:10 crc kubenswrapper[4770]: I0126 19:01:10.760723 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:11 crc kubenswrapper[4770]: I0126 19:01:11.520010 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 19:01:11 crc kubenswrapper[4770]: I0126 19:01:11.815618 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:01:11 crc kubenswrapper[4770]: I0126 19:01:11.821368 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mnwcn"] Jan 26 19:01:11 crc kubenswrapper[4770]: I0126 
19:01:11.822377 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:11 crc kubenswrapper[4770]: I0126 19:01:11.856551 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mnwcn"] Jan 26 19:01:11 crc kubenswrapper[4770]: I0126 19:01:11.942106 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9f5b-1111-4f22-abe2-7146071528f9-operator-scripts\") pod \"barbican-db-create-mnwcn\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:11 crc kubenswrapper[4770]: I0126 19:01:11.942301 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hnb2\" (UniqueName: \"kubernetes.io/projected/9e9f9f5b-1111-4f22-abe2-7146071528f9-kube-api-access-9hnb2\") pod \"barbican-db-create-mnwcn\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.017635 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5bnp2"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.019075 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.024864 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d8e4-account-create-update-lxtmh"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.026194 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.028436 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.033548 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5bnp2"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.042831 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d8e4-account-create-update-lxtmh"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.043691 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hnb2\" (UniqueName: \"kubernetes.io/projected/9e9f9f5b-1111-4f22-abe2-7146071528f9-kube-api-access-9hnb2\") pod \"barbican-db-create-mnwcn\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.043863 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9f5b-1111-4f22-abe2-7146071528f9-operator-scripts\") pod \"barbican-db-create-mnwcn\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.044783 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9f5b-1111-4f22-abe2-7146071528f9-operator-scripts\") pod \"barbican-db-create-mnwcn\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.110550 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hnb2\" (UniqueName: 
\"kubernetes.io/projected/9e9f9f5b-1111-4f22-abe2-7146071528f9-kube-api-access-9hnb2\") pod \"barbican-db-create-mnwcn\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.116823 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.141161 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.145844 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwcn\" (UniqueName: \"kubernetes.io/projected/d97d19ba-991c-40e1-85cb-fd0402872336-kube-api-access-ctwcn\") pod \"cinder-db-create-5bnp2\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.145896 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97d19ba-991c-40e1-85cb-fd0402872336-operator-scripts\") pod \"cinder-db-create-5bnp2\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.145926 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmt7w\" (UniqueName: \"kubernetes.io/projected/27c424d7-fc72-42a8-a2f4-206786467a86-kube-api-access-kmt7w\") pod \"barbican-d8e4-account-create-update-lxtmh\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.145942 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c424d7-fc72-42a8-a2f4-206786467a86-operator-scripts\") pod \"barbican-d8e4-account-create-update-lxtmh\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.246299 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1c22-account-create-update-n8c27"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.247405 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.248931 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwcn\" (UniqueName: \"kubernetes.io/projected/d97d19ba-991c-40e1-85cb-fd0402872336-kube-api-access-ctwcn\") pod \"cinder-db-create-5bnp2\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.248966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97d19ba-991c-40e1-85cb-fd0402872336-operator-scripts\") pod \"cinder-db-create-5bnp2\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.249000 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmt7w\" (UniqueName: \"kubernetes.io/projected/27c424d7-fc72-42a8-a2f4-206786467a86-kube-api-access-kmt7w\") pod \"barbican-d8e4-account-create-update-lxtmh\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.249016 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c424d7-fc72-42a8-a2f4-206786467a86-operator-scripts\") pod \"barbican-d8e4-account-create-update-lxtmh\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.249862 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c424d7-fc72-42a8-a2f4-206786467a86-operator-scripts\") pod \"barbican-d8e4-account-create-update-lxtmh\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.250004 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.250329 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97d19ba-991c-40e1-85cb-fd0402872336-operator-scripts\") pod \"cinder-db-create-5bnp2\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.256804 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1c22-account-create-update-n8c27"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.290784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmt7w\" (UniqueName: \"kubernetes.io/projected/27c424d7-fc72-42a8-a2f4-206786467a86-kube-api-access-kmt7w\") pod \"barbican-d8e4-account-create-update-lxtmh\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.292882 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x5wgl"] Jan 26 19:01:12 crc 
kubenswrapper[4770]: I0126 19:01:12.295180 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.298030 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.298176 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.298307 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hkvsm" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.298422 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.300083 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwcn\" (UniqueName: \"kubernetes.io/projected/d97d19ba-991c-40e1-85cb-fd0402872336-kube-api-access-ctwcn\") pod \"cinder-db-create-5bnp2\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.322417 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x5wgl"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.336162 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.350051 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6thqc\" (UniqueName: \"kubernetes.io/projected/2193ed97-12f7-437a-a441-222e00b8831d-kube-api-access-6thqc\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.350110 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsqm\" (UniqueName: \"kubernetes.io/projected/9f4ddc80-a3e0-4ef0-930f-e8778893071b-kube-api-access-kfsqm\") pod \"cinder-1c22-account-create-update-n8c27\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.350162 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-combined-ca-bundle\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.350191 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4ddc80-a3e0-4ef0-930f-e8778893071b-operator-scripts\") pod \"cinder-1c22-account-create-update-n8c27\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.350225 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-config-data\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.356638 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.453766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-combined-ca-bundle\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.453897 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4ddc80-a3e0-4ef0-930f-e8778893071b-operator-scripts\") pod \"cinder-1c22-account-create-update-n8c27\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.453962 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-config-data\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.454053 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6thqc\" (UniqueName: \"kubernetes.io/projected/2193ed97-12f7-437a-a441-222e00b8831d-kube-api-access-6thqc\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 
19:01:12.454093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfsqm\" (UniqueName: \"kubernetes.io/projected/9f4ddc80-a3e0-4ef0-930f-e8778893071b-kube-api-access-kfsqm\") pod \"cinder-1c22-account-create-update-n8c27\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.458736 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4ddc80-a3e0-4ef0-930f-e8778893071b-operator-scripts\") pod \"cinder-1c22-account-create-update-n8c27\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.480638 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-config-data\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.496306 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-combined-ca-bundle\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.512811 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfsqm\" (UniqueName: \"kubernetes.io/projected/9f4ddc80-a3e0-4ef0-930f-e8778893071b-kube-api-access-kfsqm\") pod \"cinder-1c22-account-create-update-n8c27\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.517292 
4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6thqc\" (UniqueName: \"kubernetes.io/projected/2193ed97-12f7-437a-a441-222e00b8831d-kube-api-access-6thqc\") pod \"keystone-db-sync-x5wgl\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") " pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.597039 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.678470 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x5wgl" Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.920828 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5bnp2"] Jan 26 19:01:12 crc kubenswrapper[4770]: I0126 19:01:12.928478 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mnwcn"] Jan 26 19:01:13 crc kubenswrapper[4770]: W0126 19:01:13.088800 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2193ed97_12f7_437a_a441_222e00b8831d.slice/crio-1e945675baa7fc499e5e201934232573b5af07920e29d081ec5a01ea7ec80f50 WatchSource:0}: Error finding container 1e945675baa7fc499e5e201934232573b5af07920e29d081ec5a01ea7ec80f50: Status 404 returned error can't find the container with id 1e945675baa7fc499e5e201934232573b5af07920e29d081ec5a01ea7ec80f50 Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.091581 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x5wgl"] Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.142432 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d8e4-account-create-update-lxtmh"] Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.227906 4770 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-1c22-account-create-update-n8c27"] Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.787379 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x5wgl" event={"ID":"2193ed97-12f7-437a-a441-222e00b8831d","Type":"ContainerStarted","Data":"1e945675baa7fc499e5e201934232573b5af07920e29d081ec5a01ea7ec80f50"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.789409 4770 generic.go:334] "Generic (PLEG): container finished" podID="d97d19ba-991c-40e1-85cb-fd0402872336" containerID="3bb67bb8f1adc12df75be8ad65f1124d23dea1c26ed852be48c3eaa5da788164" exitCode=0 Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.789518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5bnp2" event={"ID":"d97d19ba-991c-40e1-85cb-fd0402872336","Type":"ContainerDied","Data":"3bb67bb8f1adc12df75be8ad65f1124d23dea1c26ed852be48c3eaa5da788164"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.789546 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5bnp2" event={"ID":"d97d19ba-991c-40e1-85cb-fd0402872336","Type":"ContainerStarted","Data":"01602ce9a010ee171cb5efbd0e812cc3a8003901a22fb365e2af808c600797a6"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.791305 4770 generic.go:334] "Generic (PLEG): container finished" podID="9e9f9f5b-1111-4f22-abe2-7146071528f9" containerID="180cd83722999e0d55775e13dfb4d83e5b178ea5b1e89829ff123d1ef269f8c3" exitCode=0 Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.791383 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mnwcn" event={"ID":"9e9f9f5b-1111-4f22-abe2-7146071528f9","Type":"ContainerDied","Data":"180cd83722999e0d55775e13dfb4d83e5b178ea5b1e89829ff123d1ef269f8c3"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.791407 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mnwcn" 
event={"ID":"9e9f9f5b-1111-4f22-abe2-7146071528f9","Type":"ContainerStarted","Data":"1b1dcaef2d03feb3ca934cfd028ef35b26b348a38ba6bf2718e4752c0fc62afd"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.792878 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1c22-account-create-update-n8c27" event={"ID":"9f4ddc80-a3e0-4ef0-930f-e8778893071b","Type":"ContainerStarted","Data":"1b515a6e4f6ab7b9dad1d7541eb26dbebd67769fc01dea35e360b2137d5b83e5"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.792901 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1c22-account-create-update-n8c27" event={"ID":"9f4ddc80-a3e0-4ef0-930f-e8778893071b","Type":"ContainerStarted","Data":"6df026accb5685d791ab09796d07521c03b62866f6bc3bc0f5d142262f3dde25"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.794538 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d8e4-account-create-update-lxtmh" event={"ID":"27c424d7-fc72-42a8-a2f4-206786467a86","Type":"ContainerStarted","Data":"352549a81e61f1491bfce3b8bbcf817ece455aafeee464822150a9915376d5a3"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.794561 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d8e4-account-create-update-lxtmh" event={"ID":"27c424d7-fc72-42a8-a2f4-206786467a86","Type":"ContainerStarted","Data":"0823d40e1aa7cf87a63fa70dfa32ab4c918065c7c148bff827e620f2cb1dfbe5"} Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.850966 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-1c22-account-create-update-n8c27" podStartSLOduration=1.850947923 podStartE2EDuration="1.850947923s" podCreationTimestamp="2026-01-26 19:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:13.842968435 +0000 UTC m=+1158.407875167" watchObservedRunningTime="2026-01-26 
19:01:13.850947923 +0000 UTC m=+1158.415854645" Jan 26 19:01:13 crc kubenswrapper[4770]: I0126 19:01:13.863142 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d8e4-account-create-update-lxtmh" podStartSLOduration=2.863119845 podStartE2EDuration="2.863119845s" podCreationTimestamp="2026-01-26 19:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:13.862293713 +0000 UTC m=+1158.427200445" watchObservedRunningTime="2026-01-26 19:01:13.863119845 +0000 UTC m=+1158.428026577" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.770964 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-bx5vx"] Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.772250 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.775080 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.775533 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-k7ndx" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.781370 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-bx5vx"] Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.811903 4770 generic.go:334] "Generic (PLEG): container finished" podID="9f4ddc80-a3e0-4ef0-930f-e8778893071b" containerID="1b515a6e4f6ab7b9dad1d7541eb26dbebd67769fc01dea35e360b2137d5b83e5" exitCode=0 Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.812394 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1c22-account-create-update-n8c27" 
event={"ID":"9f4ddc80-a3e0-4ef0-930f-e8778893071b","Type":"ContainerDied","Data":"1b515a6e4f6ab7b9dad1d7541eb26dbebd67769fc01dea35e360b2137d5b83e5"} Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.814148 4770 generic.go:334] "Generic (PLEG): container finished" podID="27c424d7-fc72-42a8-a2f4-206786467a86" containerID="352549a81e61f1491bfce3b8bbcf817ece455aafeee464822150a9915376d5a3" exitCode=0 Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.814285 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d8e4-account-create-update-lxtmh" event={"ID":"27c424d7-fc72-42a8-a2f4-206786467a86","Type":"ContainerDied","Data":"352549a81e61f1491bfce3b8bbcf817ece455aafeee464822150a9915376d5a3"} Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.868737 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-l8r5x"] Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.870298 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.877390 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l8r5x"] Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.921526 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-combined-ca-bundle\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.922045 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-config-data\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " 
pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.922188 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9lm6\" (UniqueName: \"kubernetes.io/projected/e19ec737-f43c-4c4d-b6b0-16b535709eb6-kube-api-access-b9lm6\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.922430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-db-sync-config-data\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.948149 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-68ff-account-create-update-64vn4"] Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.949401 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.965001 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-68ff-account-create-update-64vn4"] Jan 26 19:01:14 crc kubenswrapper[4770]: I0126 19:01:14.982045 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.024625 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-config-data\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.024706 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9lm6\" (UniqueName: \"kubernetes.io/projected/e19ec737-f43c-4c4d-b6b0-16b535709eb6-kube-api-access-b9lm6\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.024735 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zkv\" (UniqueName: \"kubernetes.io/projected/e6bf6c96-e816-4d9c-890e-e347005628ec-kube-api-access-w4zkv\") pod \"glance-db-create-l8r5x\" (UID: \"e6bf6c96-e816-4d9c-890e-e347005628ec\") " pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.024796 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf6c96-e816-4d9c-890e-e347005628ec-operator-scripts\") pod \"glance-db-create-l8r5x\" (UID: \"e6bf6c96-e816-4d9c-890e-e347005628ec\") " pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:15 
crc kubenswrapper[4770]: I0126 19:01:15.024893 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-db-sync-config-data\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.024932 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-combined-ca-bundle\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.025099 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgt9\" (UniqueName: \"kubernetes.io/projected/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-kube-api-access-bsgt9\") pod \"glance-68ff-account-create-update-64vn4\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.025144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-operator-scripts\") pod \"glance-68ff-account-create-update-64vn4\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.034635 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-config-data\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 
crc kubenswrapper[4770]: I0126 19:01:15.039058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-db-sync-config-data\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.043690 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-combined-ca-bundle\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.044002 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9lm6\" (UniqueName: \"kubernetes.io/projected/e19ec737-f43c-4c4d-b6b0-16b535709eb6-kube-api-access-b9lm6\") pod \"watcher-db-sync-bx5vx\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.091603 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.128672 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgt9\" (UniqueName: \"kubernetes.io/projected/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-kube-api-access-bsgt9\") pod \"glance-68ff-account-create-update-64vn4\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.128734 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-operator-scripts\") pod \"glance-68ff-account-create-update-64vn4\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.128786 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zkv\" (UniqueName: \"kubernetes.io/projected/e6bf6c96-e816-4d9c-890e-e347005628ec-kube-api-access-w4zkv\") pod \"glance-db-create-l8r5x\" (UID: \"e6bf6c96-e816-4d9c-890e-e347005628ec\") " pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.128817 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf6c96-e816-4d9c-890e-e347005628ec-operator-scripts\") pod \"glance-db-create-l8r5x\" (UID: \"e6bf6c96-e816-4d9c-890e-e347005628ec\") " pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.129534 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf6c96-e816-4d9c-890e-e347005628ec-operator-scripts\") pod \"glance-db-create-l8r5x\" (UID: 
\"e6bf6c96-e816-4d9c-890e-e347005628ec\") " pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.129814 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-operator-scripts\") pod \"glance-68ff-account-create-update-64vn4\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.152768 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bxwvd"] Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.153032 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgt9\" (UniqueName: \"kubernetes.io/projected/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-kube-api-access-bsgt9\") pod \"glance-68ff-account-create-update-64vn4\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.154149 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.159349 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-25af-account-create-update-vx8h2"] Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.160497 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.164961 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zkv\" (UniqueName: \"kubernetes.io/projected/e6bf6c96-e816-4d9c-890e-e347005628ec-kube-api-access-w4zkv\") pod \"glance-db-create-l8r5x\" (UID: \"e6bf6c96-e816-4d9c-890e-e347005628ec\") " pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.165201 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.176555 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-25af-account-create-update-vx8h2"] Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.188077 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bxwvd"] Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.189789 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.231668 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtckf\" (UniqueName: \"kubernetes.io/projected/b18b03d7-9247-4f08-b476-558e77605786-kube-api-access-wtckf\") pod \"neutron-db-create-bxwvd\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.231759 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b18b03d7-9247-4f08-b476-558e77605786-operator-scripts\") pod \"neutron-db-create-bxwvd\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.303850 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.325066 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.348259 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtckf\" (UniqueName: \"kubernetes.io/projected/b18b03d7-9247-4f08-b476-558e77605786-kube-api-access-wtckf\") pod \"neutron-db-create-bxwvd\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.348324 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62c35c81-2111-46fb-b0c8-4e426d1d32f9-operator-scripts\") pod \"neutron-25af-account-create-update-vx8h2\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.348368 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b18b03d7-9247-4f08-b476-558e77605786-operator-scripts\") pod \"neutron-db-create-bxwvd\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.348401 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qz8r\" (UniqueName: \"kubernetes.io/projected/62c35c81-2111-46fb-b0c8-4e426d1d32f9-kube-api-access-7qz8r\") pod \"neutron-25af-account-create-update-vx8h2\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.349048 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b18b03d7-9247-4f08-b476-558e77605786-operator-scripts\") pod 
\"neutron-db-create-bxwvd\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.357270 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.375522 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtckf\" (UniqueName: \"kubernetes.io/projected/b18b03d7-9247-4f08-b476-558e77605786-kube-api-access-wtckf\") pod \"neutron-db-create-bxwvd\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.449851 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9f5b-1111-4f22-abe2-7146071528f9-operator-scripts\") pod \"9e9f9f5b-1111-4f22-abe2-7146071528f9\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.449924 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hnb2\" (UniqueName: \"kubernetes.io/projected/9e9f9f5b-1111-4f22-abe2-7146071528f9-kube-api-access-9hnb2\") pod \"9e9f9f5b-1111-4f22-abe2-7146071528f9\" (UID: \"9e9f9f5b-1111-4f22-abe2-7146071528f9\") " Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.450013 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97d19ba-991c-40e1-85cb-fd0402872336-operator-scripts\") pod \"d97d19ba-991c-40e1-85cb-fd0402872336\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.450090 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctwcn\" (UniqueName: 
\"kubernetes.io/projected/d97d19ba-991c-40e1-85cb-fd0402872336-kube-api-access-ctwcn\") pod \"d97d19ba-991c-40e1-85cb-fd0402872336\" (UID: \"d97d19ba-991c-40e1-85cb-fd0402872336\") " Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.450514 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qz8r\" (UniqueName: \"kubernetes.io/projected/62c35c81-2111-46fb-b0c8-4e426d1d32f9-kube-api-access-7qz8r\") pod \"neutron-25af-account-create-update-vx8h2\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.450723 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62c35c81-2111-46fb-b0c8-4e426d1d32f9-operator-scripts\") pod \"neutron-25af-account-create-update-vx8h2\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.451812 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9f9f5b-1111-4f22-abe2-7146071528f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e9f9f5b-1111-4f22-abe2-7146071528f9" (UID: "9e9f9f5b-1111-4f22-abe2-7146071528f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.452419 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97d19ba-991c-40e1-85cb-fd0402872336-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d97d19ba-991c-40e1-85cb-fd0402872336" (UID: "d97d19ba-991c-40e1-85cb-fd0402872336"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.455121 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62c35c81-2111-46fb-b0c8-4e426d1d32f9-operator-scripts\") pod \"neutron-25af-account-create-update-vx8h2\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.455974 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9f9f5b-1111-4f22-abe2-7146071528f9-kube-api-access-9hnb2" (OuterVolumeSpecName: "kube-api-access-9hnb2") pod "9e9f9f5b-1111-4f22-abe2-7146071528f9" (UID: "9e9f9f5b-1111-4f22-abe2-7146071528f9"). InnerVolumeSpecName "kube-api-access-9hnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.465502 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97d19ba-991c-40e1-85cb-fd0402872336-kube-api-access-ctwcn" (OuterVolumeSpecName: "kube-api-access-ctwcn") pod "d97d19ba-991c-40e1-85cb-fd0402872336" (UID: "d97d19ba-991c-40e1-85cb-fd0402872336"). InnerVolumeSpecName "kube-api-access-ctwcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.468230 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qz8r\" (UniqueName: \"kubernetes.io/projected/62c35c81-2111-46fb-b0c8-4e426d1d32f9-kube-api-access-7qz8r\") pod \"neutron-25af-account-create-update-vx8h2\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.552010 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e9f9f5b-1111-4f22-abe2-7146071528f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.552043 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hnb2\" (UniqueName: \"kubernetes.io/projected/9e9f9f5b-1111-4f22-abe2-7146071528f9-kube-api-access-9hnb2\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.552053 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d97d19ba-991c-40e1-85cb-fd0402872336-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.552062 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctwcn\" (UniqueName: \"kubernetes.io/projected/d97d19ba-991c-40e1-85cb-fd0402872336-kube-api-access-ctwcn\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.617491 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.622264 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.683723 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-bx5vx"] Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.822820 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l8r5x"] Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.827633 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5bnp2" event={"ID":"d97d19ba-991c-40e1-85cb-fd0402872336","Type":"ContainerDied","Data":"01602ce9a010ee171cb5efbd0e812cc3a8003901a22fb365e2af808c600797a6"} Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.827669 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01602ce9a010ee171cb5efbd0e812cc3a8003901a22fb365e2af808c600797a6" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.827714 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5bnp2" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.830166 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mnwcn" event={"ID":"9e9f9f5b-1111-4f22-abe2-7146071528f9","Type":"ContainerDied","Data":"1b1dcaef2d03feb3ca934cfd028ef35b26b348a38ba6bf2718e4752c0fc62afd"} Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.830202 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b1dcaef2d03feb3ca934cfd028ef35b26b348a38ba6bf2718e4752c0fc62afd" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.830259 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mnwcn" Jan 26 19:01:15 crc kubenswrapper[4770]: I0126 19:01:15.880105 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-68ff-account-create-update-64vn4"] Jan 26 19:01:17 crc kubenswrapper[4770]: I0126 19:01:17.420881 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:17 crc kubenswrapper[4770]: I0126 19:01:17.483162 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c6ff9699-8rfv9"] Jan 26 19:01:17 crc kubenswrapper[4770]: I0126 19:01:17.483419 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" podUID="f78baf61-9a55-4017-a0fe-90336e976053" containerName="dnsmasq-dns" containerID="cri-o://80c4f68ee7592030704ae039b3bdd047dce73c02995cb5e62e76a7af4b1d529b" gracePeriod=10 Jan 26 19:01:17 crc kubenswrapper[4770]: I0126 19:01:17.548493 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" podUID="f78baf61-9a55-4017-a0fe-90336e976053" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 26 19:01:17 crc kubenswrapper[4770]: I0126 19:01:17.847641 4770 generic.go:334] "Generic (PLEG): container finished" podID="f78baf61-9a55-4017-a0fe-90336e976053" containerID="80c4f68ee7592030704ae039b3bdd047dce73c02995cb5e62e76a7af4b1d529b" exitCode=0 Jan 26 19:01:17 crc kubenswrapper[4770]: I0126 19:01:17.847690 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" event={"ID":"f78baf61-9a55-4017-a0fe-90336e976053","Type":"ContainerDied","Data":"80c4f68ee7592030704ae039b3bdd047dce73c02995cb5e62e76a7af4b1d529b"} Jan 26 19:01:18 crc kubenswrapper[4770]: W0126 19:01:18.340032 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19ec737_f43c_4c4d_b6b0_16b535709eb6.slice/crio-741f28c836fdf8b9ed3df013c06969c0c35faa90d7bdb910a0fcac704ac8256e WatchSource:0}: Error finding container 741f28c836fdf8b9ed3df013c06969c0c35faa90d7bdb910a0fcac704ac8256e: Status 404 returned error can't find the container with id 741f28c836fdf8b9ed3df013c06969c0c35faa90d7bdb910a0fcac704ac8256e Jan 26 19:01:18 crc kubenswrapper[4770]: W0126 19:01:18.345883 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6bf6c96_e816_4d9c_890e_e347005628ec.slice/crio-0a9bfbf6a520cce1991049c422c63e607a22984f8305bcdcafcf617ae99e624b WatchSource:0}: Error finding container 0a9bfbf6a520cce1991049c422c63e607a22984f8305bcdcafcf617ae99e624b: Status 404 returned error can't find the container with id 0a9bfbf6a520cce1991049c422c63e607a22984f8305bcdcafcf617ae99e624b Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.650812 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.681325 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.714442 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.824362 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-sb\") pod \"f78baf61-9a55-4017-a0fe-90336e976053\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.824768 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqt8l\" (UniqueName: \"kubernetes.io/projected/f78baf61-9a55-4017-a0fe-90336e976053-kube-api-access-dqt8l\") pod \"f78baf61-9a55-4017-a0fe-90336e976053\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.824809 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-config\") pod \"f78baf61-9a55-4017-a0fe-90336e976053\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.824839 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-nb\") pod \"f78baf61-9a55-4017-a0fe-90336e976053\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.826385 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c424d7-fc72-42a8-a2f4-206786467a86-operator-scripts\") pod \"27c424d7-fc72-42a8-a2f4-206786467a86\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.826429 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-kfsqm\" (UniqueName: \"kubernetes.io/projected/9f4ddc80-a3e0-4ef0-930f-e8778893071b-kube-api-access-kfsqm\") pod \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.826448 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4ddc80-a3e0-4ef0-930f-e8778893071b-operator-scripts\") pod \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\" (UID: \"9f4ddc80-a3e0-4ef0-930f-e8778893071b\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.826501 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-dns-svc\") pod \"f78baf61-9a55-4017-a0fe-90336e976053\" (UID: \"f78baf61-9a55-4017-a0fe-90336e976053\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.826585 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmt7w\" (UniqueName: \"kubernetes.io/projected/27c424d7-fc72-42a8-a2f4-206786467a86-kube-api-access-kmt7w\") pod \"27c424d7-fc72-42a8-a2f4-206786467a86\" (UID: \"27c424d7-fc72-42a8-a2f4-206786467a86\") " Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.827947 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f4ddc80-a3e0-4ef0-930f-e8778893071b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f4ddc80-a3e0-4ef0-930f-e8778893071b" (UID: "9f4ddc80-a3e0-4ef0-930f-e8778893071b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.830870 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c424d7-fc72-42a8-a2f4-206786467a86-kube-api-access-kmt7w" (OuterVolumeSpecName: "kube-api-access-kmt7w") pod "27c424d7-fc72-42a8-a2f4-206786467a86" (UID: "27c424d7-fc72-42a8-a2f4-206786467a86"). InnerVolumeSpecName "kube-api-access-kmt7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.831067 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c424d7-fc72-42a8-a2f4-206786467a86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27c424d7-fc72-42a8-a2f4-206786467a86" (UID: "27c424d7-fc72-42a8-a2f4-206786467a86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.831537 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f78baf61-9a55-4017-a0fe-90336e976053-kube-api-access-dqt8l" (OuterVolumeSpecName: "kube-api-access-dqt8l") pod "f78baf61-9a55-4017-a0fe-90336e976053" (UID: "f78baf61-9a55-4017-a0fe-90336e976053"). InnerVolumeSpecName "kube-api-access-dqt8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.832780 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4ddc80-a3e0-4ef0-930f-e8778893071b-kube-api-access-kfsqm" (OuterVolumeSpecName: "kube-api-access-kfsqm") pod "9f4ddc80-a3e0-4ef0-930f-e8778893071b" (UID: "9f4ddc80-a3e0-4ef0-930f-e8778893071b"). InnerVolumeSpecName "kube-api-access-kfsqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.860804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-68ff-account-create-update-64vn4" event={"ID":"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05","Type":"ContainerStarted","Data":"c45c29078f09f784e7754742eb78abadb6198c8fc65d3f32e0f7747356f6ac4c"} Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.864655 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l8r5x" event={"ID":"e6bf6c96-e816-4d9c-890e-e347005628ec","Type":"ContainerStarted","Data":"0a9bfbf6a520cce1991049c422c63e607a22984f8305bcdcafcf617ae99e624b"} Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.867290 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bx5vx" event={"ID":"e19ec737-f43c-4c4d-b6b0-16b535709eb6","Type":"ContainerStarted","Data":"741f28c836fdf8b9ed3df013c06969c0c35faa90d7bdb910a0fcac704ac8256e"} Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.873011 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.873078 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c6ff9699-8rfv9" event={"ID":"f78baf61-9a55-4017-a0fe-90336e976053","Type":"ContainerDied","Data":"a2ba70c76ddbef9d73b4a10e0c16a5dc5ee364b0bd160c3c301fd9ef29de34f8"} Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.873135 4770 scope.go:117] "RemoveContainer" containerID="80c4f68ee7592030704ae039b3bdd047dce73c02995cb5e62e76a7af4b1d529b" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.874485 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f78baf61-9a55-4017-a0fe-90336e976053" (UID: "f78baf61-9a55-4017-a0fe-90336e976053"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.877408 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1c22-account-create-update-n8c27" event={"ID":"9f4ddc80-a3e0-4ef0-930f-e8778893071b","Type":"ContainerDied","Data":"6df026accb5685d791ab09796d07521c03b62866f6bc3bc0f5d142262f3dde25"} Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.877455 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df026accb5685d791ab09796d07521c03b62866f6bc3bc0f5d142262f3dde25" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.877432 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1c22-account-create-update-n8c27" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.880027 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d8e4-account-create-update-lxtmh" event={"ID":"27c424d7-fc72-42a8-a2f4-206786467a86","Type":"ContainerDied","Data":"0823d40e1aa7cf87a63fa70dfa32ab4c918065c7c148bff827e620f2cb1dfbe5"} Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.880076 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0823d40e1aa7cf87a63fa70dfa32ab4c918065c7c148bff827e620f2cb1dfbe5" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.880143 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d8e4-account-create-update-lxtmh" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.881487 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f78baf61-9a55-4017-a0fe-90336e976053" (UID: "f78baf61-9a55-4017-a0fe-90336e976053"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.893596 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-config" (OuterVolumeSpecName: "config") pod "f78baf61-9a55-4017-a0fe-90336e976053" (UID: "f78baf61-9a55-4017-a0fe-90336e976053"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.900505 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f78baf61-9a55-4017-a0fe-90336e976053" (UID: "f78baf61-9a55-4017-a0fe-90336e976053"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.913842 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bxwvd"] Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929314 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c424d7-fc72-42a8-a2f4-206786467a86-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929348 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfsqm\" (UniqueName: \"kubernetes.io/projected/9f4ddc80-a3e0-4ef0-930f-e8778893071b-kube-api-access-kfsqm\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929433 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f4ddc80-a3e0-4ef0-930f-e8778893071b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929452 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929462 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmt7w\" (UniqueName: \"kubernetes.io/projected/27c424d7-fc72-42a8-a2f4-206786467a86-kube-api-access-kmt7w\") on node \"crc\" DevicePath \"\"" 
Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929472 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929480 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqt8l\" (UniqueName: \"kubernetes.io/projected/f78baf61-9a55-4017-a0fe-90336e976053-kube-api-access-dqt8l\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929488 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.929497 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f78baf61-9a55-4017-a0fe-90336e976053-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:18 crc kubenswrapper[4770]: I0126 19:01:18.981356 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-25af-account-create-update-vx8h2"] Jan 26 19:01:19 crc kubenswrapper[4770]: I0126 19:01:19.208086 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c6ff9699-8rfv9"] Jan 26 19:01:19 crc kubenswrapper[4770]: I0126 19:01:19.218077 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c6ff9699-8rfv9"] Jan 26 19:01:19 crc kubenswrapper[4770]: I0126 19:01:19.782293 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78baf61-9a55-4017-a0fe-90336e976053" path="/var/lib/kubelet/pods/f78baf61-9a55-4017-a0fe-90336e976053/volumes" Jan 26 19:01:19 crc kubenswrapper[4770]: I0126 19:01:19.910131 4770 scope.go:117] "RemoveContainer" containerID="dc58b42d81f706a7166ec7cb9b5d32aec628b4839aa6a7d6199ab82b14e380c6" 
Jan 26 19:01:19 crc kubenswrapper[4770]: W0126 19:01:19.915726 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18b03d7_9247_4f08_b476_558e77605786.slice/crio-b1336a568dd3de743ef37ae23b1c1bd8b655be083ce6ad2e2e0a33a9b026d401 WatchSource:0}: Error finding container b1336a568dd3de743ef37ae23b1c1bd8b655be083ce6ad2e2e0a33a9b026d401: Status 404 returned error can't find the container with id b1336a568dd3de743ef37ae23b1c1bd8b655be083ce6ad2e2e0a33a9b026d401 Jan 26 19:01:19 crc kubenswrapper[4770]: W0126 19:01:19.916671 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62c35c81_2111_46fb_b0c8_4e426d1d32f9.slice/crio-6e1171cfdae8af9d88ea002092bc72c5c61a1787a94b39de3bb827c85720484b WatchSource:0}: Error finding container 6e1171cfdae8af9d88ea002092bc72c5c61a1787a94b39de3bb827c85720484b: Status 404 returned error can't find the container with id 6e1171cfdae8af9d88ea002092bc72c5c61a1787a94b39de3bb827c85720484b Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.900003 4770 generic.go:334] "Generic (PLEG): container finished" podID="62c35c81-2111-46fb-b0c8-4e426d1d32f9" containerID="0e031a1c6fccaa4c836ce83f8de7a029ed38386b5518de4a7bc02052f72b9103" exitCode=0 Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.900373 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-25af-account-create-update-vx8h2" event={"ID":"62c35c81-2111-46fb-b0c8-4e426d1d32f9","Type":"ContainerDied","Data":"0e031a1c6fccaa4c836ce83f8de7a029ed38386b5518de4a7bc02052f72b9103"} Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.900401 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-25af-account-create-update-vx8h2" event={"ID":"62c35c81-2111-46fb-b0c8-4e426d1d32f9","Type":"ContainerStarted","Data":"6e1171cfdae8af9d88ea002092bc72c5c61a1787a94b39de3bb827c85720484b"} Jan 26 
19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.903723 4770 generic.go:334] "Generic (PLEG): container finished" podID="670a14aa-6ae2-42a1-8ab2-c0b13d56cb05" containerID="e41a04982fb7f75389f695341d845e09bae5d2322f94855777d1671d61b45686" exitCode=0 Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.903813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-68ff-account-create-update-64vn4" event={"ID":"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05","Type":"ContainerDied","Data":"e41a04982fb7f75389f695341d845e09bae5d2322f94855777d1671d61b45686"} Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.909764 4770 generic.go:334] "Generic (PLEG): container finished" podID="b18b03d7-9247-4f08-b476-558e77605786" containerID="23eaebba4d4226fc37a80bb3c73d87808cfac59bd0ef2aca4607998853b39bc4" exitCode=0 Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.909896 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bxwvd" event={"ID":"b18b03d7-9247-4f08-b476-558e77605786","Type":"ContainerDied","Data":"23eaebba4d4226fc37a80bb3c73d87808cfac59bd0ef2aca4607998853b39bc4"} Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.909924 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bxwvd" event={"ID":"b18b03d7-9247-4f08-b476-558e77605786","Type":"ContainerStarted","Data":"b1336a568dd3de743ef37ae23b1c1bd8b655be083ce6ad2e2e0a33a9b026d401"} Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.920418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x5wgl" event={"ID":"2193ed97-12f7-437a-a441-222e00b8831d","Type":"ContainerStarted","Data":"082c7207f1ea69779310718fa94a039b463e1c03c684577c0db15b9cb0f1b6cc"} Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.925732 4770 generic.go:334] "Generic (PLEG): container finished" podID="e6bf6c96-e816-4d9c-890e-e347005628ec" containerID="b6f9bcf8b839ee9eb8f0eaacd1a725c981257abebe71bb6520c368768407b98e" 
exitCode=0 Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.925839 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l8r5x" event={"ID":"e6bf6c96-e816-4d9c-890e-e347005628ec","Type":"ContainerDied","Data":"b6f9bcf8b839ee9eb8f0eaacd1a725c981257abebe71bb6520c368768407b98e"} Jan 26 19:01:20 crc kubenswrapper[4770]: I0126 19:01:20.959861 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x5wgl" podStartSLOduration=3.608522871 podStartE2EDuration="8.959843939s" podCreationTimestamp="2026-01-26 19:01:12 +0000 UTC" firstStartedPulling="2026-01-26 19:01:13.091126462 +0000 UTC m=+1157.656033194" lastFinishedPulling="2026-01-26 19:01:18.44244753 +0000 UTC m=+1163.007354262" observedRunningTime="2026-01-26 19:01:20.957451833 +0000 UTC m=+1165.522358575" watchObservedRunningTime="2026-01-26 19:01:20.959843939 +0000 UTC m=+1165.524750671" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.682457 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-25af-account-create-update-vx8h2" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.692143 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l8r5x" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.733188 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bxwvd" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.740210 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-68ff-account-create-update-64vn4" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.754371 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62c35c81-2111-46fb-b0c8-4e426d1d32f9-operator-scripts\") pod \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.754764 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qz8r\" (UniqueName: \"kubernetes.io/projected/62c35c81-2111-46fb-b0c8-4e426d1d32f9-kube-api-access-7qz8r\") pod \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\" (UID: \"62c35c81-2111-46fb-b0c8-4e426d1d32f9\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.758667 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62c35c81-2111-46fb-b0c8-4e426d1d32f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62c35c81-2111-46fb-b0c8-4e426d1d32f9" (UID: "62c35c81-2111-46fb-b0c8-4e426d1d32f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.761975 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c35c81-2111-46fb-b0c8-4e426d1d32f9-kube-api-access-7qz8r" (OuterVolumeSpecName: "kube-api-access-7qz8r") pod "62c35c81-2111-46fb-b0c8-4e426d1d32f9" (UID: "62c35c81-2111-46fb-b0c8-4e426d1d32f9"). InnerVolumeSpecName "kube-api-access-7qz8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.856745 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf6c96-e816-4d9c-890e-e347005628ec-operator-scripts\") pod \"e6bf6c96-e816-4d9c-890e-e347005628ec\" (UID: \"e6bf6c96-e816-4d9c-890e-e347005628ec\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.856907 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsgt9\" (UniqueName: \"kubernetes.io/projected/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-kube-api-access-bsgt9\") pod \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.857055 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtckf\" (UniqueName: \"kubernetes.io/projected/b18b03d7-9247-4f08-b476-558e77605786-kube-api-access-wtckf\") pod \"b18b03d7-9247-4f08-b476-558e77605786\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.857083 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-operator-scripts\") pod \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\" (UID: \"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.857136 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b18b03d7-9247-4f08-b476-558e77605786-operator-scripts\") pod \"b18b03d7-9247-4f08-b476-558e77605786\" (UID: \"b18b03d7-9247-4f08-b476-558e77605786\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.857147 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/e6bf6c96-e816-4d9c-890e-e347005628ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6bf6c96-e816-4d9c-890e-e347005628ec" (UID: "e6bf6c96-e816-4d9c-890e-e347005628ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.857237 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4zkv\" (UniqueName: \"kubernetes.io/projected/e6bf6c96-e816-4d9c-890e-e347005628ec-kube-api-access-w4zkv\") pod \"e6bf6c96-e816-4d9c-890e-e347005628ec\" (UID: \"e6bf6c96-e816-4d9c-890e-e347005628ec\") " Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.860859 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "670a14aa-6ae2-42a1-8ab2-c0b13d56cb05" (UID: "670a14aa-6ae2-42a1-8ab2-c0b13d56cb05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.861311 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18b03d7-9247-4f08-b476-558e77605786-kube-api-access-wtckf" (OuterVolumeSpecName: "kube-api-access-wtckf") pod "b18b03d7-9247-4f08-b476-558e77605786" (UID: "b18b03d7-9247-4f08-b476-558e77605786"). InnerVolumeSpecName "kube-api-access-wtckf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.861463 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bf6c96-e816-4d9c-890e-e347005628ec-kube-api-access-w4zkv" (OuterVolumeSpecName: "kube-api-access-w4zkv") pod "e6bf6c96-e816-4d9c-890e-e347005628ec" (UID: "e6bf6c96-e816-4d9c-890e-e347005628ec"). 
InnerVolumeSpecName "kube-api-access-w4zkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.857884 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18b03d7-9247-4f08-b476-558e77605786-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b18b03d7-9247-4f08-b476-558e77605786" (UID: "b18b03d7-9247-4f08-b476-558e77605786"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.861662 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62c35c81-2111-46fb-b0c8-4e426d1d32f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.861879 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4zkv\" (UniqueName: \"kubernetes.io/projected/e6bf6c96-e816-4d9c-890e-e347005628ec-kube-api-access-w4zkv\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.861970 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6bf6c96-e816-4d9c-890e-e347005628ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.862058 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qz8r\" (UniqueName: \"kubernetes.io/projected/62c35c81-2111-46fb-b0c8-4e426d1d32f9-kube-api-access-7qz8r\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.862148 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtckf\" (UniqueName: \"kubernetes.io/projected/b18b03d7-9247-4f08-b476-558e77605786-kube-api-access-wtckf\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 
19:01:25.862238 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.863114 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-kube-api-access-bsgt9" (OuterVolumeSpecName: "kube-api-access-bsgt9") pod "670a14aa-6ae2-42a1-8ab2-c0b13d56cb05" (UID: "670a14aa-6ae2-42a1-8ab2-c0b13d56cb05"). InnerVolumeSpecName "kube-api-access-bsgt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.964145 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsgt9\" (UniqueName: \"kubernetes.io/projected/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05-kube-api-access-bsgt9\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.964187 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b18b03d7-9247-4f08-b476-558e77605786-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.987579 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-25af-account-create-update-vx8h2" event={"ID":"62c35c81-2111-46fb-b0c8-4e426d1d32f9","Type":"ContainerDied","Data":"6e1171cfdae8af9d88ea002092bc72c5c61a1787a94b39de3bb827c85720484b"} Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.987630 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1171cfdae8af9d88ea002092bc72c5c61a1787a94b39de3bb827c85720484b" Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.987690 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-25af-account-create-update-vx8h2"
Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.994400 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-68ff-account-create-update-64vn4"
Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.994393 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-68ff-account-create-update-64vn4" event={"ID":"670a14aa-6ae2-42a1-8ab2-c0b13d56cb05","Type":"ContainerDied","Data":"c45c29078f09f784e7754742eb78abadb6198c8fc65d3f32e0f7747356f6ac4c"}
Jan 26 19:01:25 crc kubenswrapper[4770]: I0126 19:01:25.994457 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45c29078f09f784e7754742eb78abadb6198c8fc65d3f32e0f7747356f6ac4c"
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.000749 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bxwvd" event={"ID":"b18b03d7-9247-4f08-b476-558e77605786","Type":"ContainerDied","Data":"b1336a568dd3de743ef37ae23b1c1bd8b655be083ce6ad2e2e0a33a9b026d401"}
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.000801 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1336a568dd3de743ef37ae23b1c1bd8b655be083ce6ad2e2e0a33a9b026d401"
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.000766 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bxwvd"
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.004060 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l8r5x" event={"ID":"e6bf6c96-e816-4d9c-890e-e347005628ec","Type":"ContainerDied","Data":"0a9bfbf6a520cce1991049c422c63e607a22984f8305bcdcafcf617ae99e624b"}
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.004108 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a9bfbf6a520cce1991049c422c63e607a22984f8305bcdcafcf617ae99e624b"
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.004070 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l8r5x"
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.006953 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bx5vx" event={"ID":"e19ec737-f43c-4c4d-b6b0-16b535709eb6","Type":"ContainerStarted","Data":"a40087c67e1c636108224e2070760b990493387ddd33e393e5b7ff0dd586c058"}
Jan 26 19:01:26 crc kubenswrapper[4770]: I0126 19:01:26.033941 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-bx5vx" podStartSLOduration=4.82017767 podStartE2EDuration="12.033919578s" podCreationTimestamp="2026-01-26 19:01:14 +0000 UTC" firstStartedPulling="2026-01-26 19:01:18.348430723 +0000 UTC m=+1162.913337455" lastFinishedPulling="2026-01-26 19:01:25.562172621 +0000 UTC m=+1170.127079363" observedRunningTime="2026-01-26 19:01:26.028953962 +0000 UTC m=+1170.593860714" watchObservedRunningTime="2026-01-26 19:01:26.033919578 +0000 UTC m=+1170.598826350"
Jan 26 19:01:27 crc kubenswrapper[4770]: I0126 19:01:27.019831 4770 generic.go:334] "Generic (PLEG): container finished" podID="2193ed97-12f7-437a-a441-222e00b8831d" containerID="082c7207f1ea69779310718fa94a039b463e1c03c684577c0db15b9cb0f1b6cc" exitCode=0
Jan 26 19:01:27 crc kubenswrapper[4770]: I0126 19:01:27.019949 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x5wgl" event={"ID":"2193ed97-12f7-437a-a441-222e00b8831d","Type":"ContainerDied","Data":"082c7207f1ea69779310718fa94a039b463e1c03c684577c0db15b9cb0f1b6cc"}
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.349936 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x5wgl"
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.511685 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-combined-ca-bundle\") pod \"2193ed97-12f7-437a-a441-222e00b8831d\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") "
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.511755 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-config-data\") pod \"2193ed97-12f7-437a-a441-222e00b8831d\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") "
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.511780 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6thqc\" (UniqueName: \"kubernetes.io/projected/2193ed97-12f7-437a-a441-222e00b8831d-kube-api-access-6thqc\") pod \"2193ed97-12f7-437a-a441-222e00b8831d\" (UID: \"2193ed97-12f7-437a-a441-222e00b8831d\") "
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.517070 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2193ed97-12f7-437a-a441-222e00b8831d-kube-api-access-6thqc" (OuterVolumeSpecName: "kube-api-access-6thqc") pod "2193ed97-12f7-437a-a441-222e00b8831d" (UID: "2193ed97-12f7-437a-a441-222e00b8831d"). InnerVolumeSpecName "kube-api-access-6thqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.556762 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2193ed97-12f7-437a-a441-222e00b8831d" (UID: "2193ed97-12f7-437a-a441-222e00b8831d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.579639 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-config-data" (OuterVolumeSpecName: "config-data") pod "2193ed97-12f7-437a-a441-222e00b8831d" (UID: "2193ed97-12f7-437a-a441-222e00b8831d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.613106 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.613143 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2193ed97-12f7-437a-a441-222e00b8831d-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:01:28 crc kubenswrapper[4770]: I0126 19:01:28.613153 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6thqc\" (UniqueName: \"kubernetes.io/projected/2193ed97-12f7-437a-a441-222e00b8831d-kube-api-access-6thqc\") on node \"crc\" DevicePath \"\""
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.041206 4770 generic.go:334] "Generic (PLEG): container finished" podID="e19ec737-f43c-4c4d-b6b0-16b535709eb6" containerID="a40087c67e1c636108224e2070760b990493387ddd33e393e5b7ff0dd586c058" exitCode=0
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.041305 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bx5vx" event={"ID":"e19ec737-f43c-4c4d-b6b0-16b535709eb6","Type":"ContainerDied","Data":"a40087c67e1c636108224e2070760b990493387ddd33e393e5b7ff0dd586c058"}
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.044238 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x5wgl" event={"ID":"2193ed97-12f7-437a-a441-222e00b8831d","Type":"ContainerDied","Data":"1e945675baa7fc499e5e201934232573b5af07920e29d081ec5a01ea7ec80f50"}
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.044265 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e945675baa7fc499e5e201934232573b5af07920e29d081ec5a01ea7ec80f50"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.044301 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x5wgl"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.319851 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c58d79dcf-dhrbb"]
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324345 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9f9f5b-1111-4f22-abe2-7146071528f9" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324379 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9f9f5b-1111-4f22-abe2-7146071528f9" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324391 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c35c81-2111-46fb-b0c8-4e426d1d32f9" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324397 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c35c81-2111-46fb-b0c8-4e426d1d32f9" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324410 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97d19ba-991c-40e1-85cb-fd0402872336" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324416 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97d19ba-991c-40e1-85cb-fd0402872336" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324424 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c424d7-fc72-42a8-a2f4-206786467a86" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324429 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c424d7-fc72-42a8-a2f4-206786467a86" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324450 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78baf61-9a55-4017-a0fe-90336e976053" containerName="init"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324458 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78baf61-9a55-4017-a0fe-90336e976053" containerName="init"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324466 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670a14aa-6ae2-42a1-8ab2-c0b13d56cb05" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324474 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="670a14aa-6ae2-42a1-8ab2-c0b13d56cb05" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324489 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2193ed97-12f7-437a-a441-222e00b8831d" containerName="keystone-db-sync"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324497 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2193ed97-12f7-437a-a441-222e00b8831d" containerName="keystone-db-sync"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324520 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78baf61-9a55-4017-a0fe-90336e976053" containerName="dnsmasq-dns"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324527 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78baf61-9a55-4017-a0fe-90336e976053" containerName="dnsmasq-dns"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324538 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bf6c96-e816-4d9c-890e-e347005628ec" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324545 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bf6c96-e816-4d9c-890e-e347005628ec" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324559 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f4ddc80-a3e0-4ef0-930f-e8778893071b" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324566 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f4ddc80-a3e0-4ef0-930f-e8778893071b" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: E0126 19:01:29.324581 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18b03d7-9247-4f08-b476-558e77605786" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324590 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18b03d7-9247-4f08-b476-558e77605786" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324812 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2193ed97-12f7-437a-a441-222e00b8831d" containerName="keystone-db-sync"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324839 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18b03d7-9247-4f08-b476-558e77605786" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324859 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9f9f5b-1111-4f22-abe2-7146071528f9" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324872 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bf6c96-e816-4d9c-890e-e347005628ec" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324889 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="62c35c81-2111-46fb-b0c8-4e426d1d32f9" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324904 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97d19ba-991c-40e1-85cb-fd0402872336" containerName="mariadb-database-create"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324919 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4ddc80-a3e0-4ef0-930f-e8778893071b" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324926 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c424d7-fc72-42a8-a2f4-206786467a86" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324937 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78baf61-9a55-4017-a0fe-90336e976053" containerName="dnsmasq-dns"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.324947 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="670a14aa-6ae2-42a1-8ab2-c0b13d56cb05" containerName="mariadb-account-create-update"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.325868 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.328765 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c58d79dcf-dhrbb"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.336460 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rz2kk"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.337992 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.354240 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.354554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.354793 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.357101 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hkvsm"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.360522 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rz2kk"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.364289 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437560 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-scripts\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7ctr\" (UniqueName: \"kubernetes.io/projected/25a46e6d-4f19-4740-9b25-91465c0dc5fd-kube-api-access-j7ctr\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437633 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-svc\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437649 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnnkg\" (UniqueName: \"kubernetes.io/projected/76ea3b7c-d372-42fe-9499-8a236fa52d86-kube-api-access-cnnkg\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437666 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437686 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-swift-storage-0\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437726 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-config\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437743 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-combined-ca-bundle\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437765 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-credential-keys\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437794 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-config-data\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437829 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.437884 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-fernet-keys\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.539785 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-scripts\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.539846 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7ctr\" (UniqueName: \"kubernetes.io/projected/25a46e6d-4f19-4740-9b25-91465c0dc5fd-kube-api-access-j7ctr\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.539882 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-svc\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.539906 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnnkg\" (UniqueName: \"kubernetes.io/projected/76ea3b7c-d372-42fe-9499-8a236fa52d86-kube-api-access-cnnkg\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.539928 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.539963 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-swift-storage-0\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.540003 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-config\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.540028 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-combined-ca-bundle\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.540066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-credential-keys\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.540104 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-config-data\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.540154 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.540243 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-fernet-keys\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.543646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-swift-storage-0\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.547453 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-svc\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.548290 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-scripts\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.549094 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.549733 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-config\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.550313 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.552126 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-fernet-keys\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.554018 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-config-data\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.556202 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-credential-keys\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.560482 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-combined-ca-bundle\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.603465 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7ctr\" (UniqueName: \"kubernetes.io/projected/25a46e6d-4f19-4740-9b25-91465c0dc5fd-kube-api-access-j7ctr\") pod \"dnsmasq-dns-5c58d79dcf-dhrbb\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.609597 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d98f6d9fc-2mvcq"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.617201 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.618192 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnnkg\" (UniqueName: \"kubernetes.io/projected/76ea3b7c-d372-42fe-9499-8a236fa52d86-kube-api-access-cnnkg\") pod \"keystone-bootstrap-rz2kk\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.632156 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.632555 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.632686 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-ml5pp"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.632815 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.635581 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d98f6d9fc-2mvcq"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.666115 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.669116 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rz2kk"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.696563 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.707195 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.718134 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.718398 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.745608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-config-data\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.745986 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-scripts\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.746078 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba97d4cb-f979-4520-919c-891dac22767a-horizon-secret-key\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.746173 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92tdf\" (UniqueName: \"kubernetes.io/projected/ba97d4cb-f979-4520-919c-891dac22767a-kube-api-access-92tdf\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.746298 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba97d4cb-f979-4520-919c-891dac22767a-logs\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.814416 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.814449 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-f98bs"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.817510 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f98bs"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.824563 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qcpct"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.824763 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.825229 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.841572 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f98bs"]
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.848848 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92tdf\" (UniqueName: \"kubernetes.io/projected/ba97d4cb-f979-4520-919c-891dac22767a-kube-api-access-92tdf\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.848888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba97d4cb-f979-4520-919c-891dac22767a-logs\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.848910 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-config-data\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.848936 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.848982 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-scripts\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.848997 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28cnp\" (UniqueName: \"kubernetes.io/projected/859f9d5b-265e-4d91-a4e1-faca291a3073-kube-api-access-28cnp\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.849047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-config-data\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.849072 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-run-httpd\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.849095 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.849119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-log-httpd\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.849155 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-scripts\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq"
Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.849176 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba97d4cb-f979-4520-919c-891dac22767a-horizon-secret-key\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID:
\"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.850601 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba97d4cb-f979-4520-919c-891dac22767a-logs\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.852066 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-config-data\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.852562 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-scripts\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.869975 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba97d4cb-f979-4520-919c-891dac22767a-horizon-secret-key\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.918740 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92tdf\" (UniqueName: \"kubernetes.io/projected/ba97d4cb-f979-4520-919c-891dac22767a-kube-api-access-92tdf\") pod \"horizon-5d98f6d9fc-2mvcq\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.961979 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-db-sync-config-data\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962062 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/200a66de-48c2-4fad-babc-4e45e99790cd-etc-machine-id\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-config-data\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962204 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962240 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-config-data\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-combined-ca-bundle\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962302 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b5pz\" (UniqueName: \"kubernetes.io/projected/200a66de-48c2-4fad-babc-4e45e99790cd-kube-api-access-8b5pz\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-scripts\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962364 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28cnp\" (UniqueName: \"kubernetes.io/projected/859f9d5b-265e-4d91-a4e1-faca291a3073-kube-api-access-28cnp\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962450 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-run-httpd\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-scripts\") pod \"cinder-db-sync-f98bs\" (UID: 
\"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962496 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.962531 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-log-httpd\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.963058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-log-httpd\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.963870 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-tx8s8"] Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.979776 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-run-httpd\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.986811 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.987628 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-scripts\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:29 crc kubenswrapper[4770]: I0126 19:01:29.990493 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.001913 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.002371 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bj6dd" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.003848 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-config-data\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.031597 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.041061 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28cnp\" (UniqueName: 
\"kubernetes.io/projected/859f9d5b-265e-4d91-a4e1-faca291a3073-kube-api-access-28cnp\") pod \"ceilometer-0\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") " pod="openstack/ceilometer-0" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.041359 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tx8s8"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070377 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvr55\" (UniqueName: \"kubernetes.io/projected/380a5f13-cc8e-42b0-92db-e487e61edcb9-kube-api-access-lvr55\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070425 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-config-data\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070450 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-combined-ca-bundle\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b5pz\" (UniqueName: \"kubernetes.io/projected/200a66de-48c2-4fad-babc-4e45e99790cd-kube-api-access-8b5pz\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070523 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-db-sync-config-data\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070566 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-scripts\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070587 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-combined-ca-bundle\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070616 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-db-sync-config-data\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070645 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/200a66de-48c2-4fad-babc-4e45e99790cd-etc-machine-id\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.070733 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/200a66de-48c2-4fad-babc-4e45e99790cd-etc-machine-id\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.082806 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-config-data\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.100875 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.108783 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-db-sync-config-data\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.112026 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-scripts\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.114800 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c58d79dcf-dhrbb"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.120266 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.122964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-combined-ca-bundle\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.123178 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b5pz\" (UniqueName: \"kubernetes.io/projected/200a66de-48c2-4fad-babc-4e45e99790cd-kube-api-access-8b5pz\") pod \"cinder-db-sync-f98bs\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.148784 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-756947f775-qtsh2"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.150658 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.151729 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f98bs" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.165823 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wjwrr"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.167049 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.170584 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.171000 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p8xgx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.171785 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-db-sync-config-data\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.172176 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-combined-ca-bundle\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.172241 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-svc\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.173577 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.179349 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-combined-ca-bundle\") pod 
\"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.179751 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-nb\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.179866 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-swift-storage-0\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.179894 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-config\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.180051 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4mdx\" (UniqueName: \"kubernetes.io/projected/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-kube-api-access-s4mdx\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.180125 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvr55\" (UniqueName: 
\"kubernetes.io/projected/380a5f13-cc8e-42b0-92db-e487e61edcb9-kube-api-access-lvr55\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.180169 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-sb\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.180603 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756947f775-qtsh2"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.191410 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-db-sync-config-data\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.203328 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d65468f89-s89jx"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.205223 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.216823 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvr55\" (UniqueName: \"kubernetes.io/projected/380a5f13-cc8e-42b0-92db-e487e61edcb9-kube-api-access-lvr55\") pod \"barbican-db-sync-tx8s8\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.233991 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wjwrr"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.247430 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d65468f89-s89jx"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286181 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-scripts\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286227 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-logs\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-config-data\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286320 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-svc\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-nb\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286365 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhm62\" (UniqueName: \"kubernetes.io/projected/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-kube-api-access-jhm62\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286393 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-swift-storage-0\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-config\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286431 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcb6g\" (UniqueName: \"kubernetes.io/projected/8cd21f2e-d98a-4363-afc3-5707b0ee540d-kube-api-access-pcb6g\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286462 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4mdx\" (UniqueName: \"kubernetes.io/projected/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-kube-api-access-s4mdx\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286478 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd21f2e-d98a-4363-afc3-5707b0ee540d-logs\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286502 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-horizon-secret-key\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286522 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-scripts\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286539 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-sb\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286554 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-config-data\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.286585 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-combined-ca-bundle\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.294703 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-swift-storage-0\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.295249 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-config\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.295889 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-svc\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.295901 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-sb\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.296222 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-nb\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.310922 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4mdx\" (UniqueName: \"kubernetes.io/projected/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-kube-api-access-s4mdx\") pod \"dnsmasq-dns-756947f775-qtsh2\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.330109 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.330151 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.330186 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.331426 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c87daf1a126cd93e465998417d60959f10223fe0df7679f35c5368eec51dbce0"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.331471 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://c87daf1a126cd93e465998417d60959f10223fe0df7679f35c5368eec51dbce0" gracePeriod=600 Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.358919 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-q2sdv"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.360120 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.364506 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.364810 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dhz8c" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.369864 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.376311 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-q2sdv"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.387911 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-config-data\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.387966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhm62\" (UniqueName: \"kubernetes.io/projected/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-kube-api-access-jhm62\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388003 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcb6g\" (UniqueName: \"kubernetes.io/projected/8cd21f2e-d98a-4363-afc3-5707b0ee540d-kube-api-access-pcb6g\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd21f2e-d98a-4363-afc3-5707b0ee540d-logs\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388063 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-horizon-secret-key\") 
pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388087 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-scripts\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388116 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-config-data\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388162 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-combined-ca-bundle\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388201 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-scripts\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.388225 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-logs\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc 
kubenswrapper[4770]: I0126 19:01:30.394085 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-config-data\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.394609 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-scripts\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.394827 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-logs\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.395029 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-scripts\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.395245 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd21f2e-d98a-4363-afc3-5707b0ee540d-logs\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.398184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-horizon-secret-key\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.408860 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-combined-ca-bundle\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.411691 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-config-data\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.416569 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcb6g\" (UniqueName: \"kubernetes.io/projected/8cd21f2e-d98a-4363-afc3-5707b0ee540d-kube-api-access-pcb6g\") pod \"placement-db-sync-wjwrr\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.416854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhm62\" (UniqueName: \"kubernetes.io/projected/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-kube-api-access-jhm62\") pod \"horizon-5d65468f89-s89jx\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.491068 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-db-sync-config-data\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.491146 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8s4\" (UniqueName: \"kubernetes.io/projected/9d149076-49cc-4a5a-80f8-c34dac1c2b45-kube-api-access-4v8s4\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.491268 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-config-data\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.491356 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-combined-ca-bundle\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.493017 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cd84h"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.494140 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.495471 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rb6zt" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.495692 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.495988 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.504262 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cd84h"] Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.507362 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.528568 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wjwrr" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.545474 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.594627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8s4\" (UniqueName: \"kubernetes.io/projected/9d149076-49cc-4a5a-80f8-c34dac1c2b45-kube-api-access-4v8s4\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.594720 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-config-data\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.594773 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-combined-ca-bundle\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.594800 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-config\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.594871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv6t7\" (UniqueName: \"kubernetes.io/projected/0b185c8a-0b51-4433-9e44-2121cb5415ba-kube-api-access-bv6t7\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc 
kubenswrapper[4770]: I0126 19:01:30.594943 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-db-sync-config-data\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.594980 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-combined-ca-bundle\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.603402 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-db-sync-config-data\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.610070 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-config-data\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.612215 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-combined-ca-bundle\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.634369 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4v8s4\" (UniqueName: \"kubernetes.io/projected/9d149076-49cc-4a5a-80f8-c34dac1c2b45-kube-api-access-4v8s4\") pod \"glance-db-sync-q2sdv\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.696650 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-config\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.697196 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv6t7\" (UniqueName: \"kubernetes.io/projected/0b185c8a-0b51-4433-9e44-2121cb5415ba-kube-api-access-bv6t7\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.697277 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-combined-ca-bundle\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.700093 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-q2sdv" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.702425 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-config\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.703539 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-combined-ca-bundle\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.735706 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv6t7\" (UniqueName: \"kubernetes.io/projected/0b185c8a-0b51-4433-9e44-2121cb5415ba-kube-api-access-bv6t7\") pod \"neutron-db-sync-cd84h\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:30 crc kubenswrapper[4770]: I0126 19:01:30.813623 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cd84h" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.082293 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="c87daf1a126cd93e465998417d60959f10223fe0df7679f35c5368eec51dbce0" exitCode=0 Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.082636 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"c87daf1a126cd93e465998417d60959f10223fe0df7679f35c5368eec51dbce0"} Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.082668 4770 scope.go:117] "RemoveContainer" containerID="759ad108705104ebfd180c02710e3cc9f867c8dcc0c0763f8371a75d18ecbaef" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.087715 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bx5vx" event={"ID":"e19ec737-f43c-4c4d-b6b0-16b535709eb6","Type":"ContainerDied","Data":"741f28c836fdf8b9ed3df013c06969c0c35faa90d7bdb910a0fcac704ac8256e"} Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.087756 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741f28c836fdf8b9ed3df013c06969c0c35faa90d7bdb910a0fcac704ac8256e" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.101626 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.214006 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-combined-ca-bundle\") pod \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.214156 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9lm6\" (UniqueName: \"kubernetes.io/projected/e19ec737-f43c-4c4d-b6b0-16b535709eb6-kube-api-access-b9lm6\") pod \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.214217 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-db-sync-config-data\") pod \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.214275 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-config-data\") pod \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\" (UID: \"e19ec737-f43c-4c4d-b6b0-16b535709eb6\") " Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.220159 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19ec737-f43c-4c4d-b6b0-16b535709eb6-kube-api-access-b9lm6" (OuterVolumeSpecName: "kube-api-access-b9lm6") pod "e19ec737-f43c-4c4d-b6b0-16b535709eb6" (UID: "e19ec737-f43c-4c4d-b6b0-16b535709eb6"). InnerVolumeSpecName "kube-api-access-b9lm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.223951 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e19ec737-f43c-4c4d-b6b0-16b535709eb6" (UID: "e19ec737-f43c-4c4d-b6b0-16b535709eb6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.249817 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e19ec737-f43c-4c4d-b6b0-16b535709eb6" (UID: "e19ec737-f43c-4c4d-b6b0-16b535709eb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.263486 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-config-data" (OuterVolumeSpecName: "config-data") pod "e19ec737-f43c-4c4d-b6b0-16b535709eb6" (UID: "e19ec737-f43c-4c4d-b6b0-16b535709eb6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.316689 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.316770 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.316782 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e19ec737-f43c-4c4d-b6b0-16b535709eb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.316793 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9lm6\" (UniqueName: \"kubernetes.io/projected/e19ec737-f43c-4c4d-b6b0-16b535709eb6-kube-api-access-b9lm6\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.444436 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tx8s8"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.461870 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.473217 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f98bs"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.497039 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c58d79dcf-dhrbb"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.528132 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d65468f89-s89jx"] Jan 26 19:01:31 crc kubenswrapper[4770]: W0126 19:01:31.528710 4770 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a46e6d_4f19_4740_9b25_91465c0dc5fd.slice/crio-1b942599fac3e18b9976d04e16d791dc2cacb91e18b2499584a444823bb67086 WatchSource:0}: Error finding container 1b942599fac3e18b9976d04e16d791dc2cacb91e18b2499584a444823bb67086: Status 404 returned error can't find the container with id 1b942599fac3e18b9976d04e16d791dc2cacb91e18b2499584a444823bb67086 Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.584024 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wjwrr"] Jan 26 19:01:31 crc kubenswrapper[4770]: W0126 19:01:31.643869 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b185c8a_0b51_4433_9e44_2121cb5415ba.slice/crio-70c98161f680dd7052b665fcb3b7b1a7c263549d2248500641c2872f11eb7b54 WatchSource:0}: Error finding container 70c98161f680dd7052b665fcb3b7b1a7c263549d2248500641c2872f11eb7b54: Status 404 returned error can't find the container with id 70c98161f680dd7052b665fcb3b7b1a7c263549d2248500641c2872f11eb7b54 Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.675664 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756947f775-qtsh2"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.691927 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d98f6d9fc-2mvcq"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.700328 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rz2kk"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.708952 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cd84h"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.815786 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-q2sdv"] Jan 26 19:01:31 crc 
kubenswrapper[4770]: I0126 19:01:31.920832 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d65468f89-s89jx"] Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.970295 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f79ff69cc-httrz"] Jan 26 19:01:31 crc kubenswrapper[4770]: E0126 19:01:31.970680 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19ec737-f43c-4c4d-b6b0-16b535709eb6" containerName="watcher-db-sync" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.970691 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19ec737-f43c-4c4d-b6b0-16b535709eb6" containerName="watcher-db-sync" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.970904 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19ec737-f43c-4c4d-b6b0-16b535709eb6" containerName="watcher-db-sync" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.977803 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:31 crc kubenswrapper[4770]: I0126 19:01:31.989684 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.002890 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f79ff69cc-httrz"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.037639 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrmqz\" (UniqueName: \"kubernetes.io/projected/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-kube-api-access-hrmqz\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.037728 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-horizon-secret-key\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.037747 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-logs\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.037781 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-config-data\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc 
kubenswrapper[4770]: I0126 19:01:32.037819 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-scripts\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.110494 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rz2kk" event={"ID":"76ea3b7c-d372-42fe-9499-8a236fa52d86","Type":"ContainerStarted","Data":"9332ad8edbc740a0a9b400cc66abcc6650b76b8cd1df3f7ab345b7e230af5d16"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.113850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756947f775-qtsh2" event={"ID":"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7","Type":"ContainerStarted","Data":"78fd270d2209f0875bf008fa0efb470759cef54c792500eaf3387a36bd48cb1d"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.119763 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tx8s8" event={"ID":"380a5f13-cc8e-42b0-92db-e487e61edcb9","Type":"ContainerStarted","Data":"096a7f7708fe2b4d37089dc3a5c6187e8083101164857360cf4799fe3b54f3f9"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.121329 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerStarted","Data":"7771d14ef808b96f2e5dcdace58a42e8faaa9c2d4242f073a2c0dbb6831dacb8"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.137184 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f98bs" event={"ID":"200a66de-48c2-4fad-babc-4e45e99790cd","Type":"ContainerStarted","Data":"9a63160420d572afa0a650c5ef481bca307c3d285819006739ef5d8391fc3e94"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.140115 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrmqz\" (UniqueName: \"kubernetes.io/projected/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-kube-api-access-hrmqz\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.140509 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-logs\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.140534 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-horizon-secret-key\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.140577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-config-data\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.140634 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-scripts\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.151687 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wjwrr" 
event={"ID":"8cd21f2e-d98a-4363-afc3-5707b0ee540d","Type":"ContainerStarted","Data":"b33952e7c262496902f6176a637223b5f1b2ab1e7f17b779023161976301b394"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.152000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-logs\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.154800 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-horizon-secret-key\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.156013 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-config-data\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.156409 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrmqz\" (UniqueName: \"kubernetes.io/projected/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-kube-api-access-hrmqz\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.160446 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-scripts\") pod \"horizon-5f79ff69cc-httrz\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " pod="openstack/horizon-5f79ff69cc-httrz" Jan 
26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.161182 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cd84h" event={"ID":"0b185c8a-0b51-4433-9e44-2121cb5415ba","Type":"ContainerStarted","Data":"70c98161f680dd7052b665fcb3b7b1a7c263549d2248500641c2872f11eb7b54"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.174768 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"386f64784b2c322d50fefdfd9ed37a3405a8ac95082cf30f59e32e718434f3cd"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.178190 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d98f6d9fc-2mvcq" event={"ID":"ba97d4cb-f979-4520-919c-891dac22767a","Type":"ContainerStarted","Data":"3576ba3f52503aa85fc9a222e91416458c1ab0c3678af9f05a385e85e980c3b8"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.182619 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cd84h" podStartSLOduration=2.182599342 podStartE2EDuration="2.182599342s" podCreationTimestamp="2026-01-26 19:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:32.182030227 +0000 UTC m=+1176.746936959" watchObservedRunningTime="2026-01-26 19:01:32.182599342 +0000 UTC m=+1176.747506074" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.183974 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb" event={"ID":"25a46e6d-4f19-4740-9b25-91465c0dc5fd","Type":"ContainerStarted","Data":"1b942599fac3e18b9976d04e16d791dc2cacb91e18b2499584a444823bb67086"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.189493 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q2sdv" 
event={"ID":"9d149076-49cc-4a5a-80f8-c34dac1c2b45","Type":"ContainerStarted","Data":"d5bcbb8828c800f7e73aa6eeec67b631c46bd6b0b0f0d325f72b092baf28d9e1"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.191542 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d65468f89-s89jx" event={"ID":"028d5f93-dd95-4a7d-a5b0-8b6c1815019e","Type":"ContainerStarted","Data":"70ebeea42f83e072a55f7cda33f99603a8a534ee2da1320f70c93b8469684623"} Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.191577 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-bx5vx" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.316163 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.393454 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.395021 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.400157 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.403002 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-k7ndx" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.411143 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.453579 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfd8p\" (UniqueName: \"kubernetes.io/projected/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-kube-api-access-wfd8p\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.453778 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.453907 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-config-data\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.453930 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-combined-ca-bundle\") pod \"watcher-api-0\" 
(UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.453986 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-logs\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.491030 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.492108 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.497180 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.514342 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.540001 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.541228 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.544507 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.552863 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557757 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e85df5-499b-4543-aab5-e1d3ce9d1473-config-data\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557801 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfd8p\" (UniqueName: \"kubernetes.io/projected/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-kube-api-access-wfd8p\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557828 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5m22\" (UniqueName: \"kubernetes.io/projected/e5e85df5-499b-4543-aab5-e1d3ce9d1473-kube-api-access-z5m22\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557863 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e85df5-499b-4543-aab5-e1d3ce9d1473-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 
19:01:32.557884 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557927 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-config-data\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557974 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e85df5-499b-4543-aab5-e1d3ce9d1473-logs\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.557993 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-logs\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.558374 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-logs\") pod \"watcher-api-0\" (UID: 
\"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.575293 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-config-data\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.578774 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.593281 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.603649 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfd8p\" (UniqueName: \"kubernetes.io/projected/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-kube-api-access-wfd8p\") pod \"watcher-api-0\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660303 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660479 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660594 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e85df5-499b-4543-aab5-e1d3ce9d1473-config-data\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660630 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5m22\" (UniqueName: \"kubernetes.io/projected/e5e85df5-499b-4543-aab5-e1d3ce9d1473-kube-api-access-z5m22\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660709 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e85df5-499b-4543-aab5-e1d3ce9d1473-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660775 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660856 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-logs\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660904 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e85df5-499b-4543-aab5-e1d3ce9d1473-logs\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.660999 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pptb6\" (UniqueName: \"kubernetes.io/projected/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-kube-api-access-pptb6\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.669673 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e85df5-499b-4543-aab5-e1d3ce9d1473-config-data\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.671573 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e85df5-499b-4543-aab5-e1d3ce9d1473-logs\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.680612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e85df5-499b-4543-aab5-e1d3ce9d1473-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") 
" pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.685137 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5m22\" (UniqueName: \"kubernetes.io/projected/e5e85df5-499b-4543-aab5-e1d3ce9d1473-kube-api-access-z5m22\") pod \"watcher-applier-0\" (UID: \"e5e85df5-499b-4543-aab5-e1d3ce9d1473\") " pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.744107 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.762794 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-logs\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.762901 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pptb6\" (UniqueName: \"kubernetes.io/projected/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-kube-api-access-pptb6\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.762932 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.762970 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-combined-ca-bundle\") pod 
\"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.763033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.763312 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-logs\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.767806 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.772156 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.778369 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" 
Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.802832 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pptb6\" (UniqueName: \"kubernetes.io/projected/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-kube-api-access-pptb6\") pod \"watcher-decision-engine-0\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.820411 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.826264 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.847001 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb" Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.852466 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f79ff69cc-httrz"] Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.980189 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-config\") pod \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.980633 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-swift-storage-0\") pod \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.980702 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7ctr\" (UniqueName: 
\"kubernetes.io/projected/25a46e6d-4f19-4740-9b25-91465c0dc5fd-kube-api-access-j7ctr\") pod \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.981312 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-nb\") pod \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.981367 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-sb\") pod \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.981436 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-svc\") pod \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\" (UID: \"25a46e6d-4f19-4740-9b25-91465c0dc5fd\") " Jan 26 19:01:32 crc kubenswrapper[4770]: I0126 19:01:32.989067 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a46e6d-4f19-4740-9b25-91465c0dc5fd-kube-api-access-j7ctr" (OuterVolumeSpecName: "kube-api-access-j7ctr") pod "25a46e6d-4f19-4740-9b25-91465c0dc5fd" (UID: "25a46e6d-4f19-4740-9b25-91465c0dc5fd"). InnerVolumeSpecName "kube-api-access-j7ctr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.006959 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-config" (OuterVolumeSpecName: "config") pod "25a46e6d-4f19-4740-9b25-91465c0dc5fd" (UID: "25a46e6d-4f19-4740-9b25-91465c0dc5fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.013141 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25a46e6d-4f19-4740-9b25-91465c0dc5fd" (UID: "25a46e6d-4f19-4740-9b25-91465c0dc5fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.020249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25a46e6d-4f19-4740-9b25-91465c0dc5fd" (UID: "25a46e6d-4f19-4740-9b25-91465c0dc5fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.021422 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25a46e6d-4f19-4740-9b25-91465c0dc5fd" (UID: "25a46e6d-4f19-4740-9b25-91465c0dc5fd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.027157 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25a46e6d-4f19-4740-9b25-91465c0dc5fd" (UID: "25a46e6d-4f19-4740-9b25-91465c0dc5fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.089987 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.090031 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7ctr\" (UniqueName: \"kubernetes.io/projected/25a46e6d-4f19-4740-9b25-91465c0dc5fd-kube-api-access-j7ctr\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.090044 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.090055 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.090066 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.090077 4770 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/25a46e6d-4f19-4740-9b25-91465c0dc5fd-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.205397 4770 generic.go:334] "Generic (PLEG): container finished" podID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerID="485d376c5c8851e562c16fc2e57c3b348368c7dec1fd982de2c6e89e6af35d29" exitCode=0 Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.205467 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756947f775-qtsh2" event={"ID":"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7","Type":"ContainerDied","Data":"485d376c5c8851e562c16fc2e57c3b348368c7dec1fd982de2c6e89e6af35d29"} Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.212425 4770 generic.go:334] "Generic (PLEG): container finished" podID="25a46e6d-4f19-4740-9b25-91465c0dc5fd" containerID="2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3" exitCode=0 Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.212468 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.212503 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb" event={"ID":"25a46e6d-4f19-4740-9b25-91465c0dc5fd","Type":"ContainerDied","Data":"2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3"} Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.212550 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c58d79dcf-dhrbb" event={"ID":"25a46e6d-4f19-4740-9b25-91465c0dc5fd","Type":"ContainerDied","Data":"1b942599fac3e18b9976d04e16d791dc2cacb91e18b2499584a444823bb67086"} Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.212568 4770 scope.go:117] "RemoveContainer" containerID="2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.218487 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f79ff69cc-httrz" event={"ID":"3e78737b-a30f-410b-b5c4-7ea7fb79cae5","Type":"ContainerStarted","Data":"5bade9dfc39fa37063e0fa96aa07faffc92836990dee3076d5b68335c920c928"} Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.230955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cd84h" event={"ID":"0b185c8a-0b51-4433-9e44-2121cb5415ba","Type":"ContainerStarted","Data":"bc25c7e207907afb161546c49b113cee4d85aa5316c0704cad6c2422fbe7c529"} Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.245520 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rz2kk" event={"ID":"76ea3b7c-d372-42fe-9499-8a236fa52d86","Type":"ContainerStarted","Data":"00903d6abfbf16fa4eafacde69687e6d84a3a183cb253c689eedfbafe4d0fda0"} Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.273450 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rz2kk" 
podStartSLOduration=4.273430969 podStartE2EDuration="4.273430969s" podCreationTimestamp="2026-01-26 19:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:33.265935935 +0000 UTC m=+1177.830842677" watchObservedRunningTime="2026-01-26 19:01:33.273430969 +0000 UTC m=+1177.838337691" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.297929 4770 scope.go:117] "RemoveContainer" containerID="2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3" Jan 26 19:01:33 crc kubenswrapper[4770]: E0126 19:01:33.310279 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3\": container with ID starting with 2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3 not found: ID does not exist" containerID="2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.310318 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3"} err="failed to get container status \"2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3\": rpc error: code = NotFound desc = could not find container \"2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3\": container with ID starting with 2cbf3ba42069b251b1034510a166457d5bbb6d2c0837fda98d03d3288c1f0ff3 not found: ID does not exist" Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.339897 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c58d79dcf-dhrbb"] Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.348150 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c58d79dcf-dhrbb"] Jan 26 19:01:33 crc kubenswrapper[4770]: 
I0126 19:01:33.379197 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.429028 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.482779 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 26 19:01:33 crc kubenswrapper[4770]: I0126 19:01:33.780585 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a46e6d-4f19-4740-9b25-91465c0dc5fd" path="/var/lib/kubelet/pods/25a46e6d-4f19-4740-9b25-91465c0dc5fd/volumes" Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.289068 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e5e85df5-499b-4543-aab5-e1d3ce9d1473","Type":"ContainerStarted","Data":"d6c3ba9873da190b318f6b0dbc6d559548b0e2c47087fa45b7e4152d9eb07fe6"} Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.290823 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerStarted","Data":"a8e49ec4068a96558d84b99cb49cf8ed9f9175f2e8592a87dc625685d6e0d506"} Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.292832 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756947f775-qtsh2" event={"ID":"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7","Type":"ContainerStarted","Data":"631a8fb295b08214f90f2ccbc9bd46529d99d14399365d5141a68416cbeb9ad5"} Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.294353 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.317393 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"221c0ba1-05b9-4079-9ba6-e9efe82d66c8","Type":"ContainerStarted","Data":"667a00d1d32107d838d2a0496ca447f59a9b03b43941ba644edbc8a1e46292e8"} Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.317425 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"221c0ba1-05b9-4079-9ba6-e9efe82d66c8","Type":"ContainerStarted","Data":"010bb8b3f14b304fb3c478ee2f2c39df24d878a242c6e8f795849de3257dda5a"} Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.317436 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"221c0ba1-05b9-4079-9ba6-e9efe82d66c8","Type":"ContainerStarted","Data":"79f09ce2900194e3e37ee7e721aa63d0222deb019e0c0f6452970142000fab06"} Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.317447 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.318493 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": dial tcp 10.217.0.157:9322: connect: connection refused" Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.342471 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-756947f775-qtsh2" podStartSLOduration=5.342456471 podStartE2EDuration="5.342456471s" podCreationTimestamp="2026-01-26 19:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:34.329310353 +0000 UTC m=+1178.894217085" watchObservedRunningTime="2026-01-26 19:01:34.342456471 +0000 UTC m=+1178.907363203" Jan 26 19:01:34 crc kubenswrapper[4770]: I0126 19:01:34.352591 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" 
podStartSLOduration=2.352569737 podStartE2EDuration="2.352569737s" podCreationTimestamp="2026-01-26 19:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:01:34.348954339 +0000 UTC m=+1178.913861101" watchObservedRunningTime="2026-01-26 19:01:34.352569737 +0000 UTC m=+1178.917476489" Jan 26 19:01:37 crc kubenswrapper[4770]: I0126 19:01:37.745111 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 19:01:37 crc kubenswrapper[4770]: I0126 19:01:37.950131 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.321245 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d98f6d9fc-2mvcq"] Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.340609 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f47668778-9m4hm"] Jan 26 19:01:38 crc kubenswrapper[4770]: E0126 19:01:38.340983 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a46e6d-4f19-4740-9b25-91465c0dc5fd" containerName="init" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.340995 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a46e6d-4f19-4740-9b25-91465c0dc5fd" containerName="init" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.341153 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a46e6d-4f19-4740-9b25-91465c0dc5fd" containerName="init" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.342006 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.349445 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.382464 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f47668778-9m4hm"] Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.407139 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-scripts\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.407196 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-tls-certs\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.407223 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-combined-ca-bundle\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.407270 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85wb\" (UniqueName: \"kubernetes.io/projected/8adb68a1-1d86-4d72-93b1-0e8e499542af-kube-api-access-z85wb\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 
19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.407341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-secret-key\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.407453 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8adb68a1-1d86-4d72-93b1-0e8e499542af-logs\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.407536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-config-data\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.442436 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f79ff69cc-httrz"] Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.495221 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77b47dc986-cqqn6"] Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.497092 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.508931 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-secret-key\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.509048 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8adb68a1-1d86-4d72-93b1-0e8e499542af-logs\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.509103 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-config-data\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.509149 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-scripts\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.509174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-tls-certs\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.509196 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-combined-ca-bundle\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.509234 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z85wb\" (UniqueName: \"kubernetes.io/projected/8adb68a1-1d86-4d72-93b1-0e8e499542af-kube-api-access-z85wb\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.513990 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-scripts\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.517003 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8adb68a1-1d86-4d72-93b1-0e8e499542af-logs\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.518361 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-config-data\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.519202 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-combined-ca-bundle\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.543593 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-tls-certs\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.543879 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-secret-key\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.547892 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77b47dc986-cqqn6"] Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.598602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z85wb\" (UniqueName: \"kubernetes.io/projected/8adb68a1-1d86-4d72-93b1-0e8e499542af-kube-api-access-z85wb\") pod \"horizon-f47668778-9m4hm\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.616425 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-horizon-tls-certs\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.616470 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b445e3-2f98-4b3d-9290-4e7eff894ef0-logs\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.616585 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-horizon-secret-key\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.616715 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65b445e3-2f98-4b3d-9290-4e7eff894ef0-scripts\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.616829 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w49f7\" (UniqueName: \"kubernetes.io/projected/65b445e3-2f98-4b3d-9290-4e7eff894ef0-kube-api-access-w49f7\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.616892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65b445e3-2f98-4b3d-9290-4e7eff894ef0-config-data\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.617018 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-combined-ca-bundle\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.675197 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718348 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-combined-ca-bundle\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-horizon-tls-certs\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718458 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b445e3-2f98-4b3d-9290-4e7eff894ef0-logs\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718543 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-horizon-secret-key\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " 
pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718586 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65b445e3-2f98-4b3d-9290-4e7eff894ef0-scripts\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718640 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w49f7\" (UniqueName: \"kubernetes.io/projected/65b445e3-2f98-4b3d-9290-4e7eff894ef0-kube-api-access-w49f7\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718683 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65b445e3-2f98-4b3d-9290-4e7eff894ef0-config-data\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.718976 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b445e3-2f98-4b3d-9290-4e7eff894ef0-logs\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.719777 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65b445e3-2f98-4b3d-9290-4e7eff894ef0-scripts\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.720268 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65b445e3-2f98-4b3d-9290-4e7eff894ef0-config-data\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.723254 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-horizon-secret-key\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.723531 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-combined-ca-bundle\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.741432 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b445e3-2f98-4b3d-9290-4e7eff894ef0-horizon-tls-certs\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.742776 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w49f7\" (UniqueName: \"kubernetes.io/projected/65b445e3-2f98-4b3d-9290-4e7eff894ef0-kube-api-access-w49f7\") pod \"horizon-77b47dc986-cqqn6\" (UID: \"65b445e3-2f98-4b3d-9290-4e7eff894ef0\") " pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:38 crc kubenswrapper[4770]: I0126 19:01:38.820282 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:01:39 crc kubenswrapper[4770]: I0126 19:01:39.370401 4770 generic.go:334] "Generic (PLEG): container finished" podID="76ea3b7c-d372-42fe-9499-8a236fa52d86" containerID="00903d6abfbf16fa4eafacde69687e6d84a3a183cb253c689eedfbafe4d0fda0" exitCode=0 Jan 26 19:01:39 crc kubenswrapper[4770]: I0126 19:01:39.370452 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rz2kk" event={"ID":"76ea3b7c-d372-42fe-9499-8a236fa52d86","Type":"ContainerDied","Data":"00903d6abfbf16fa4eafacde69687e6d84a3a183cb253c689eedfbafe4d0fda0"} Jan 26 19:01:40 crc kubenswrapper[4770]: I0126 19:01:40.510482 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:01:40 crc kubenswrapper[4770]: I0126 19:01:40.571028 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d87d859d9-ll7rh"] Jan 26 19:01:40 crc kubenswrapper[4770]: I0126 19:01:40.571325 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" containerID="cri-o://f741360c676c664ce9827e53e3f6fcc77e91d052b633fb159a0361adf506b1f4" gracePeriod=10 Jan 26 19:01:41 crc kubenswrapper[4770]: I0126 19:01:41.416556 4770 generic.go:334] "Generic (PLEG): container finished" podID="2607908d-b3c2-41a1-b445-386aacb914f1" containerID="f741360c676c664ce9827e53e3f6fcc77e91d052b633fb159a0361adf506b1f4" exitCode=0 Jan 26 19:01:41 crc kubenswrapper[4770]: I0126 19:01:41.416673 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" event={"ID":"2607908d-b3c2-41a1-b445-386aacb914f1","Type":"ContainerDied","Data":"f741360c676c664ce9827e53e3f6fcc77e91d052b633fb159a0361adf506b1f4"} Jan 26 19:01:42 crc kubenswrapper[4770]: I0126 19:01:42.419577 4770 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Jan 26 19:01:42 crc kubenswrapper[4770]: I0126 19:01:42.745500 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 26 19:01:42 crc kubenswrapper[4770]: I0126 19:01:42.752336 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 26 19:01:43 crc kubenswrapper[4770]: I0126 19:01:43.465176 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 19:01:47 crc kubenswrapper[4770]: I0126 19:01:47.035316 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:01:47 crc kubenswrapper[4770]: I0126 19:01:47.035951 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" containerID="cri-o://010bb8b3f14b304fb3c478ee2f2c39df24d878a242c6e8f795849de3257dda5a" gracePeriod=30 Jan 26 19:01:47 crc kubenswrapper[4770]: I0126 19:01:47.036085 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" containerID="cri-o://667a00d1d32107d838d2a0496ca447f59a9b03b43941ba644edbc8a1e46292e8" gracePeriod=30 Jan 26 19:01:47 crc kubenswrapper[4770]: I0126 19:01:47.420593 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Jan 26 19:01:47 crc kubenswrapper[4770]: I0126 19:01:47.748870 4770 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": dial tcp 10.217.0.157:9322: connect: connection refused" Jan 26 19:01:47 crc kubenswrapper[4770]: I0126 19:01:47.748924 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": dial tcp 10.217.0.157:9322: connect: connection refused" Jan 26 19:01:47 crc kubenswrapper[4770]: E0126 19:01:47.870489 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 19:01:47 crc kubenswrapper[4770]: E0126 19:01:47.870834 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 19:01:47 crc kubenswrapper[4770]: E0126 19:01:47.870973 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbdh87h97h7h5f8h664h8chc4h58bh74hd6h6ch68h568h54fh9ch54h568h5ffh64bhcfh54dhf4h5fh5d9hf4h695h67bh6bh5c9h664h88q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrmqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5f79ff69cc-httrz_openstack(3e78737b-a30f-410b-b5c4-7ea7fb79cae5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:01:47 crc kubenswrapper[4770]: E0126 19:01:47.875879 
4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5f79ff69cc-httrz" podUID="3e78737b-a30f-410b-b5c4-7ea7fb79cae5" Jan 26 19:01:48 crc kubenswrapper[4770]: I0126 19:01:48.478970 4770 generic.go:334] "Generic (PLEG): container finished" podID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerID="667a00d1d32107d838d2a0496ca447f59a9b03b43941ba644edbc8a1e46292e8" exitCode=0 Jan 26 19:01:48 crc kubenswrapper[4770]: I0126 19:01:48.478996 4770 generic.go:334] "Generic (PLEG): container finished" podID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerID="010bb8b3f14b304fb3c478ee2f2c39df24d878a242c6e8f795849de3257dda5a" exitCode=143 Jan 26 19:01:48 crc kubenswrapper[4770]: I0126 19:01:48.479060 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"221c0ba1-05b9-4079-9ba6-e9efe82d66c8","Type":"ContainerDied","Data":"667a00d1d32107d838d2a0496ca447f59a9b03b43941ba644edbc8a1e46292e8"} Jan 26 19:01:48 crc kubenswrapper[4770]: I0126 19:01:48.479118 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"221c0ba1-05b9-4079-9ba6-e9efe82d66c8","Type":"ContainerDied","Data":"010bb8b3f14b304fb3c478ee2f2c39df24d878a242c6e8f795849de3257dda5a"} Jan 26 19:01:51 crc kubenswrapper[4770]: E0126 19:01:51.026739 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 26 19:01:51 crc kubenswrapper[4770]: E0126 19:01:51.027081 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 26 19:01:51 crc kubenswrapper[4770]: E0126 19:01:51.027234 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.223:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pcb6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*4248
2,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-wjwrr_openstack(8cd21f2e-d98a-4363-afc3-5707b0ee540d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:01:51 crc kubenswrapper[4770]: E0126 19:01:51.028933 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-wjwrr" podUID="8cd21f2e-d98a-4363-afc3-5707b0ee540d" Jan 26 19:01:51 crc kubenswrapper[4770]: E0126 19:01:51.517728 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-wjwrr" podUID="8cd21f2e-d98a-4363-afc3-5707b0ee540d" Jan 26 19:01:51 crc kubenswrapper[4770]: I0126 19:01:51.960638 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rz2kk" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.091739 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-scripts\") pod \"76ea3b7c-d372-42fe-9499-8a236fa52d86\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.091834 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-config-data\") pod \"76ea3b7c-d372-42fe-9499-8a236fa52d86\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.092152 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnnkg\" (UniqueName: \"kubernetes.io/projected/76ea3b7c-d372-42fe-9499-8a236fa52d86-kube-api-access-cnnkg\") pod \"76ea3b7c-d372-42fe-9499-8a236fa52d86\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.092350 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-combined-ca-bundle\") pod \"76ea3b7c-d372-42fe-9499-8a236fa52d86\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.092387 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-fernet-keys\") pod \"76ea3b7c-d372-42fe-9499-8a236fa52d86\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.092512 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-credential-keys\") pod \"76ea3b7c-d372-42fe-9499-8a236fa52d86\" (UID: \"76ea3b7c-d372-42fe-9499-8a236fa52d86\") " Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.100324 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "76ea3b7c-d372-42fe-9499-8a236fa52d86" (UID: "76ea3b7c-d372-42fe-9499-8a236fa52d86"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.102083 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ea3b7c-d372-42fe-9499-8a236fa52d86-kube-api-access-cnnkg" (OuterVolumeSpecName: "kube-api-access-cnnkg") pod "76ea3b7c-d372-42fe-9499-8a236fa52d86" (UID: "76ea3b7c-d372-42fe-9499-8a236fa52d86"). InnerVolumeSpecName "kube-api-access-cnnkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.102311 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "76ea3b7c-d372-42fe-9499-8a236fa52d86" (UID: "76ea3b7c-d372-42fe-9499-8a236fa52d86"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.117828 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.118061 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.118208 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.223:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvr55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAs
User:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-tx8s8_openstack(380a5f13-cc8e-42b0-92db-e487e61edcb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.118688 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-scripts" (OuterVolumeSpecName: "scripts") pod "76ea3b7c-d372-42fe-9499-8a236fa52d86" (UID: "76ea3b7c-d372-42fe-9499-8a236fa52d86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.119564 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-tx8s8" podUID="380a5f13-cc8e-42b0-92db-e487e61edcb9" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.126916 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-config-data" (OuterVolumeSpecName: "config-data") pod "76ea3b7c-d372-42fe-9499-8a236fa52d86" (UID: "76ea3b7c-d372-42fe-9499-8a236fa52d86"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.133839 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76ea3b7c-d372-42fe-9499-8a236fa52d86" (UID: "76ea3b7c-d372-42fe-9499-8a236fa52d86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.204959 4770 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.204997 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.205009 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.205018 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnnkg\" (UniqueName: \"kubernetes.io/projected/76ea3b7c-d372-42fe-9499-8a236fa52d86-kube-api-access-cnnkg\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.205030 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.205038 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/76ea3b7c-d372-42fe-9499-8a236fa52d86-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.227464 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.227900 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.228274 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch59bhd4h677hb8h668h65fh5dbh599h59dhc6hd8h5c6h55fh5dbh574hd9h6fhd6h669h5ffh5c6h5c6h95h65chf5h59bh5c6h566hb8h64bh64cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92tdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath
:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5d98f6d9fc-2mvcq_openstack(ba97d4cb-f979-4520-919c-891dac22767a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.231574 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5d98f6d9fc-2mvcq" podUID="ba97d4cb-f979-4520-919c-891dac22767a" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.307881 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.307962 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 26 19:01:52 crc 
kubenswrapper[4770]: E0126 19:01:52.308151 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56bh76h574h65dh99h5b4hcbhbch5bch77h66dh76hf6h68bh59h58bh89h58h59dh5d6h567hb5h85h664h68fh679h54h5fch89h67dhbhfbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhm62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,
} start failed in pod horizon-5d65468f89-s89jx_openstack(028d5f93-dd95-4a7d-a5b0-8b6c1815019e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.312625 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-5d65468f89-s89jx" podUID="028d5f93-dd95-4a7d-a5b0-8b6c1815019e" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.525987 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rz2kk" event={"ID":"76ea3b7c-d372-42fe-9499-8a236fa52d86","Type":"ContainerDied","Data":"9332ad8edbc740a0a9b400cc66abcc6650b76b8cd1df3f7ab345b7e230af5d16"} Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.526091 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9332ad8edbc740a0a9b400cc66abcc6650b76b8cd1df3f7ab345b7e230af5d16" Jan 26 19:01:52 crc kubenswrapper[4770]: I0126 19:01:52.526027 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rz2kk" Jan 26 19:01:52 crc kubenswrapper[4770]: E0126 19:01:52.528305 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-tx8s8" podUID="380a5f13-cc8e-42b0-92db-e487e61edcb9" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.078605 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rz2kk"] Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.087738 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rz2kk"] Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.178272 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-sz7sx"] Jan 26 19:01:53 crc kubenswrapper[4770]: E0126 19:01:53.178779 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ea3b7c-d372-42fe-9499-8a236fa52d86" containerName="keystone-bootstrap" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.178796 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ea3b7c-d372-42fe-9499-8a236fa52d86" containerName="keystone-bootstrap" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.179017 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ea3b7c-d372-42fe-9499-8a236fa52d86" containerName="keystone-bootstrap" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.179796 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.181374 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.181721 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hkvsm" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.181961 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.182516 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.182783 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.204918 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sz7sx"] Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.328783 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-credential-keys\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.329039 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsbt\" (UniqueName: \"kubernetes.io/projected/2a1913a3-ef04-48f1-9e48-d669c97e66cb-kube-api-access-czsbt\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.329119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-config-data\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.329301 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-scripts\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.329358 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-combined-ca-bundle\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.329442 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-fernet-keys\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.430820 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsbt\" (UniqueName: \"kubernetes.io/projected/2a1913a3-ef04-48f1-9e48-d669c97e66cb-kube-api-access-czsbt\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.430873 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-config-data\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.430909 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-scripts\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.430924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-combined-ca-bundle\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.430946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-fernet-keys\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.430985 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-credential-keys\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.435311 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-credential-keys\") pod \"keystone-bootstrap-sz7sx\" (UID: 
\"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.436474 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-combined-ca-bundle\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.436761 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-config-data\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.438299 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-scripts\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.439861 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-fernet-keys\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.456283 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsbt\" (UniqueName: \"kubernetes.io/projected/2a1913a3-ef04-48f1-9e48-d669c97e66cb-kube-api-access-czsbt\") pod \"keystone-bootstrap-sz7sx\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.498286 
4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:01:53 crc kubenswrapper[4770]: I0126 19:01:53.780327 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ea3b7c-d372-42fe-9499-8a236fa52d86" path="/var/lib/kubelet/pods/76ea3b7c-d372-42fe-9499-8a236fa52d86/volumes" Jan 26 19:01:57 crc kubenswrapper[4770]: I0126 19:01:57.420405 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 19:01:57 crc kubenswrapper[4770]: I0126 19:01:57.421543 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:01:57 crc kubenswrapper[4770]: I0126 19:01:57.746140 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:01:57 crc kubenswrapper[4770]: I0126 19:01:57.746278 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:02:01 crc kubenswrapper[4770]: E0126 19:02:01.951946 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 19:02:01 crc kubenswrapper[4770]: E0126 19:02:01.952518 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 26 19:02:01 crc kubenswrapper[4770]: E0126 19:02:01.952645 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.223:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4v8s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&Secc
ompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-q2sdv_openstack(9d149076-49cc-4a5a-80f8-c34dac1c2b45): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:02:01 crc kubenswrapper[4770]: E0126 19:02:01.953871 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-q2sdv" podUID="9d149076-49cc-4a5a-80f8-c34dac1c2b45" Jan 26 19:02:02 crc kubenswrapper[4770]: I0126 19:02:02.422148 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 19:02:02 crc kubenswrapper[4770]: E0126 19:02:02.617912 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-q2sdv" podUID="9d149076-49cc-4a5a-80f8-c34dac1c2b45" Jan 26 19:02:02 crc kubenswrapper[4770]: I0126 19:02:02.746823 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:02:02 crc kubenswrapper[4770]: I0126 19:02:02.746901 4770 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:02:02 crc kubenswrapper[4770]: I0126 19:02:02.747279 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 19:02:02 crc kubenswrapper[4770]: I0126 19:02:02.747331 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 19:02:07 crc kubenswrapper[4770]: I0126 19:02:07.422861 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 19:02:07 crc kubenswrapper[4770]: I0126 19:02:07.657662 4770 generic.go:334] "Generic (PLEG): container finished" podID="0b185c8a-0b51-4433-9e44-2121cb5415ba" containerID="bc25c7e207907afb161546c49b113cee4d85aa5316c0704cad6c2422fbe7c529" exitCode=0 Jan 26 19:02:07 crc kubenswrapper[4770]: I0126 19:02:07.657758 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cd84h" event={"ID":"0b185c8a-0b51-4433-9e44-2121cb5415ba","Type":"ContainerDied","Data":"bc25c7e207907afb161546c49b113cee4d85aa5316c0704cad6c2422fbe7c529"} Jan 26 19:02:07 crc kubenswrapper[4770]: I0126 19:02:07.747645 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:02:07 crc kubenswrapper[4770]: I0126 19:02:07.747750 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" 
podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.116229 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.247847 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-scripts\") pod \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.247952 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-horizon-secret-key\") pod \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.247991 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrmqz\" (UniqueName: \"kubernetes.io/projected/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-kube-api-access-hrmqz\") pod \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.248054 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-config-data\") pod \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.248086 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-logs\") pod \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\" (UID: \"3e78737b-a30f-410b-b5c4-7ea7fb79cae5\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.248353 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-scripts" (OuterVolumeSpecName: "scripts") pod "3e78737b-a30f-410b-b5c4-7ea7fb79cae5" (UID: "3e78737b-a30f-410b-b5c4-7ea7fb79cae5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.248539 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.248552 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-logs" (OuterVolumeSpecName: "logs") pod "3e78737b-a30f-410b-b5c4-7ea7fb79cae5" (UID: "3e78737b-a30f-410b-b5c4-7ea7fb79cae5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.248685 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-config-data" (OuterVolumeSpecName: "config-data") pod "3e78737b-a30f-410b-b5c4-7ea7fb79cae5" (UID: "3e78737b-a30f-410b-b5c4-7ea7fb79cae5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.255918 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3e78737b-a30f-410b-b5c4-7ea7fb79cae5" (UID: "3e78737b-a30f-410b-b5c4-7ea7fb79cae5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.255982 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-kube-api-access-hrmqz" (OuterVolumeSpecName: "kube-api-access-hrmqz") pod "3e78737b-a30f-410b-b5c4-7ea7fb79cae5" (UID: "3e78737b-a30f-410b-b5c4-7ea7fb79cae5"). InnerVolumeSpecName "kube-api-access-hrmqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.350146 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.350181 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrmqz\" (UniqueName: \"kubernetes.io/projected/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-kube-api-access-hrmqz\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.350193 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.350201 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e78737b-a30f-410b-b5c4-7ea7fb79cae5-logs\") 
on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: E0126 19:02:10.372153 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 19:02:10 crc kubenswrapper[4770]: E0126 19:02:10.372206 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 26 19:02:10 crc kubenswrapper[4770]: E0126 19:02:10.372321 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.223:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65fh65dh9ch678h58bh656h67bh97h546h697h5bfh59dh64bh58ch5d6h98hb4h559h5dchfh5f6h67h55h67dh577h579h95h5fch6bhd7h64dhc9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOn
ly:nil,},VolumeMount{Name:kube-api-access-28cnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(859f9d5b-265e-4d91-a4e1-faca291a3073): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.469051 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.478954 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.495066 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.501261 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.522305 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cd84h" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.654542 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-scripts\") pod \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.654603 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-config-data\") pod \"ba97d4cb-f979-4520-919c-891dac22767a\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.654634 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-nb\") pod \"2607908d-b3c2-41a1-b445-386aacb914f1\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655284 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-config-data" (OuterVolumeSpecName: "config-data") pod "ba97d4cb-f979-4520-919c-891dac22767a" (UID: "ba97d4cb-f979-4520-919c-891dac22767a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655266 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-scripts" (OuterVolumeSpecName: "scripts") pod "028d5f93-dd95-4a7d-a5b0-8b6c1815019e" (UID: "028d5f93-dd95-4a7d-a5b0-8b6c1815019e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.654687 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfd8p\" (UniqueName: \"kubernetes.io/projected/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-kube-api-access-wfd8p\") pod \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655442 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba97d4cb-f979-4520-919c-891dac22767a-logs\") pod \"ba97d4cb-f979-4520-919c-891dac22767a\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655480 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-swift-storage-0\") pod \"2607908d-b3c2-41a1-b445-386aacb914f1\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655511 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-combined-ca-bundle\") pod \"0b185c8a-0b51-4433-9e44-2121cb5415ba\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655536 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-svc\") pod \"2607908d-b3c2-41a1-b445-386aacb914f1\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655571 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhm62\" (UniqueName: \"kubernetes.io/projected/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-kube-api-access-jhm62\") pod \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655594 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-config-data\") pod \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655615 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92tdf\" (UniqueName: \"kubernetes.io/projected/ba97d4cb-f979-4520-919c-891dac22767a-kube-api-access-92tdf\") pod \"ba97d4cb-f979-4520-919c-891dac22767a\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-logs\") pod \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655681 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-scripts\") pod \"ba97d4cb-f979-4520-919c-891dac22767a\" (UID: 
\"ba97d4cb-f979-4520-919c-891dac22767a\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655743 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-config\") pod \"2607908d-b3c2-41a1-b445-386aacb914f1\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655819 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-combined-ca-bundle\") pod \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655878 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-custom-prometheus-ca\") pod \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655911 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfnm\" (UniqueName: \"kubernetes.io/projected/2607908d-b3c2-41a1-b445-386aacb914f1-kube-api-access-9mfnm\") pod \"2607908d-b3c2-41a1-b445-386aacb914f1\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655934 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv6t7\" (UniqueName: \"kubernetes.io/projected/0b185c8a-0b51-4433-9e44-2121cb5415ba-kube-api-access-bv6t7\") pod \"0b185c8a-0b51-4433-9e44-2121cb5415ba\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.655981 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-config\") pod \"0b185c8a-0b51-4433-9e44-2121cb5415ba\" (UID: \"0b185c8a-0b51-4433-9e44-2121cb5415ba\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.656004 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-logs\") pod \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.656030 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-config-data\") pod \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\" (UID: \"221c0ba1-05b9-4079-9ba6-e9efe82d66c8\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.656058 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba97d4cb-f979-4520-919c-891dac22767a-horizon-secret-key\") pod \"ba97d4cb-f979-4520-919c-891dac22767a\" (UID: \"ba97d4cb-f979-4520-919c-891dac22767a\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.656090 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-sb\") pod \"2607908d-b3c2-41a1-b445-386aacb914f1\" (UID: \"2607908d-b3c2-41a1-b445-386aacb914f1\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.656118 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-horizon-secret-key\") pod \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\" (UID: \"028d5f93-dd95-4a7d-a5b0-8b6c1815019e\") " Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.657045 4770 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.657343 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.659910 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba97d4cb-f979-4520-919c-891dac22767a-logs" (OuterVolumeSpecName: "logs") pod "ba97d4cb-f979-4520-919c-891dac22767a" (UID: "ba97d4cb-f979-4520-919c-891dac22767a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.659914 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-logs" (OuterVolumeSpecName: "logs") pod "221c0ba1-05b9-4079-9ba6-e9efe82d66c8" (UID: "221c0ba1-05b9-4079-9ba6-e9efe82d66c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.659997 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-logs" (OuterVolumeSpecName: "logs") pod "028d5f93-dd95-4a7d-a5b0-8b6c1815019e" (UID: "028d5f93-dd95-4a7d-a5b0-8b6c1815019e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.661063 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-kube-api-access-wfd8p" (OuterVolumeSpecName: "kube-api-access-wfd8p") pod "221c0ba1-05b9-4079-9ba6-e9efe82d66c8" (UID: "221c0ba1-05b9-4079-9ba6-e9efe82d66c8"). InnerVolumeSpecName "kube-api-access-wfd8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.661223 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-config-data" (OuterVolumeSpecName: "config-data") pod "028d5f93-dd95-4a7d-a5b0-8b6c1815019e" (UID: "028d5f93-dd95-4a7d-a5b0-8b6c1815019e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.661650 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "028d5f93-dd95-4a7d-a5b0-8b6c1815019e" (UID: "028d5f93-dd95-4a7d-a5b0-8b6c1815019e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.665078 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba97d4cb-f979-4520-919c-891dac22767a-kube-api-access-92tdf" (OuterVolumeSpecName: "kube-api-access-92tdf") pod "ba97d4cb-f979-4520-919c-891dac22767a" (UID: "ba97d4cb-f979-4520-919c-891dac22767a"). InnerVolumeSpecName "kube-api-access-92tdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.665549 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-scripts" (OuterVolumeSpecName: "scripts") pod "ba97d4cb-f979-4520-919c-891dac22767a" (UID: "ba97d4cb-f979-4520-919c-891dac22767a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.668605 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-kube-api-access-jhm62" (OuterVolumeSpecName: "kube-api-access-jhm62") pod "028d5f93-dd95-4a7d-a5b0-8b6c1815019e" (UID: "028d5f93-dd95-4a7d-a5b0-8b6c1815019e"). InnerVolumeSpecName "kube-api-access-jhm62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.668880 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2607908d-b3c2-41a1-b445-386aacb914f1-kube-api-access-9mfnm" (OuterVolumeSpecName: "kube-api-access-9mfnm") pod "2607908d-b3c2-41a1-b445-386aacb914f1" (UID: "2607908d-b3c2-41a1-b445-386aacb914f1"). InnerVolumeSpecName "kube-api-access-9mfnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.670511 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba97d4cb-f979-4520-919c-891dac22767a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ba97d4cb-f979-4520-919c-891dac22767a" (UID: "ba97d4cb-f979-4520-919c-891dac22767a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.673887 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b185c8a-0b51-4433-9e44-2121cb5415ba-kube-api-access-bv6t7" (OuterVolumeSpecName: "kube-api-access-bv6t7") pod "0b185c8a-0b51-4433-9e44-2121cb5415ba" (UID: "0b185c8a-0b51-4433-9e44-2121cb5415ba"). InnerVolumeSpecName "kube-api-access-bv6t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.690721 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d98f6d9fc-2mvcq" event={"ID":"ba97d4cb-f979-4520-919c-891dac22767a","Type":"ContainerDied","Data":"3576ba3f52503aa85fc9a222e91416458c1ab0c3678af9f05a385e85e980c3b8"} Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.690788 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d98f6d9fc-2mvcq" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.697025 4770 scope.go:117] "RemoveContainer" containerID="aeae434cd3a1dad9880146a71ce8ee1d57e60b9bcad71d956f5bf84614ac8323" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.697398 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" event={"ID":"2607908d-b3c2-41a1-b445-386aacb914f1","Type":"ContainerDied","Data":"cbc16fe85b85cb698b3018aaeda366f25234d17702dd9b3597880431d8d8a799"} Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.697435 4770 scope.go:117] "RemoveContainer" containerID="f741360c676c664ce9827e53e3f6fcc77e91d052b633fb159a0361adf506b1f4" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.697459 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.701036 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"221c0ba1-05b9-4079-9ba6-e9efe82d66c8","Type":"ContainerDied","Data":"79f09ce2900194e3e37ee7e721aa63d0222deb019e0c0f6452970142000fab06"} Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.701053 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.705013 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f79ff69cc-httrz" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.705128 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f79ff69cc-httrz" event={"ID":"3e78737b-a30f-410b-b5c4-7ea7fb79cae5","Type":"ContainerDied","Data":"5bade9dfc39fa37063e0fa96aa07faffc92836990dee3076d5b68335c920c928"} Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.708666 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "221c0ba1-05b9-4079-9ba6-e9efe82d66c8" (UID: "221c0ba1-05b9-4079-9ba6-e9efe82d66c8"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.709514 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d65468f89-s89jx" event={"ID":"028d5f93-dd95-4a7d-a5b0-8b6c1815019e","Type":"ContainerDied","Data":"70ebeea42f83e072a55f7cda33f99603a8a534ee2da1320f70c93b8469684623"} Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.709579 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d65468f89-s89jx" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.710572 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b185c8a-0b51-4433-9e44-2121cb5415ba" (UID: "0b185c8a-0b51-4433-9e44-2121cb5415ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.711030 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cd84h" event={"ID":"0b185c8a-0b51-4433-9e44-2121cb5415ba","Type":"ContainerDied","Data":"70c98161f680dd7052b665fcb3b7b1a7c263549d2248500641c2872f11eb7b54"} Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.711056 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c98161f680dd7052b665fcb3b7b1a7c263549d2248500641c2872f11eb7b54" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.711087 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cd84h" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.722414 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "221c0ba1-05b9-4079-9ba6-e9efe82d66c8" (UID: "221c0ba1-05b9-4079-9ba6-e9efe82d66c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.729347 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-config" (OuterVolumeSpecName: "config") pod "2607908d-b3c2-41a1-b445-386aacb914f1" (UID: "2607908d-b3c2-41a1-b445-386aacb914f1"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.730395 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-config" (OuterVolumeSpecName: "config") pod "0b185c8a-0b51-4433-9e44-2121cb5415ba" (UID: "0b185c8a-0b51-4433-9e44-2121cb5415ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.733810 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2607908d-b3c2-41a1-b445-386aacb914f1" (UID: "2607908d-b3c2-41a1-b445-386aacb914f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.736249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2607908d-b3c2-41a1-b445-386aacb914f1" (UID: "2607908d-b3c2-41a1-b445-386aacb914f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.744051 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2607908d-b3c2-41a1-b445-386aacb914f1" (UID: "2607908d-b3c2-41a1-b445-386aacb914f1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.751374 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2607908d-b3c2-41a1-b445-386aacb914f1" (UID: "2607908d-b3c2-41a1-b445-386aacb914f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758468 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758496 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758506 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758514 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfd8p\" (UniqueName: \"kubernetes.io/projected/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-kube-api-access-wfd8p\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758522 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba97d4cb-f979-4520-919c-891dac22767a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758532 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758540 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758548 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758558 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhm62\" (UniqueName: \"kubernetes.io/projected/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-kube-api-access-jhm62\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758566 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758575 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92tdf\" (UniqueName: \"kubernetes.io/projected/ba97d4cb-f979-4520-919c-891dac22767a-kube-api-access-92tdf\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758582 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758590 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba97d4cb-f979-4520-919c-891dac22767a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 
19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758598 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2607908d-b3c2-41a1-b445-386aacb914f1-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758605 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758613 4770 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758622 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfnm\" (UniqueName: \"kubernetes.io/projected/2607908d-b3c2-41a1-b445-386aacb914f1-kube-api-access-9mfnm\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758630 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv6t7\" (UniqueName: \"kubernetes.io/projected/0b185c8a-0b51-4433-9e44-2121cb5415ba-kube-api-access-bv6t7\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758639 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b185c8a-0b51-4433-9e44-2121cb5415ba-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758647 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/028d5f93-dd95-4a7d-a5b0-8b6c1815019e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.758655 4770 reconciler_common.go:293] "Volume detached for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ba97d4cb-f979-4520-919c-891dac22767a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.761200 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-config-data" (OuterVolumeSpecName: "config-data") pod "221c0ba1-05b9-4079-9ba6-e9efe82d66c8" (UID: "221c0ba1-05b9-4079-9ba6-e9efe82d66c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.794850 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d98f6d9fc-2mvcq"] Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.812668 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d98f6d9fc-2mvcq"] Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.833482 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f79ff69cc-httrz"] Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.841450 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5f79ff69cc-httrz"] Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.860529 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221c0ba1-05b9-4079-9ba6-e9efe82d66c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.861991 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d65468f89-s89jx"] Jan 26 19:02:10 crc kubenswrapper[4770]: I0126 19:02:10.884311 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d65468f89-s89jx"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.046429 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d87d859d9-ll7rh"] Jan 26 19:02:11 crc 
kubenswrapper[4770]: I0126 19:02:11.056753 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d87d859d9-ll7rh"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.135504 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.148682 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160131 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:02:11 crc kubenswrapper[4770]: E0126 19:02:11.160601 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b185c8a-0b51-4433-9e44-2121cb5415ba" containerName="neutron-db-sync" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160625 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b185c8a-0b51-4433-9e44-2121cb5415ba" containerName="neutron-db-sync" Jan 26 19:02:11 crc kubenswrapper[4770]: E0126 19:02:11.160639 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160649 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" Jan 26 19:02:11 crc kubenswrapper[4770]: E0126 19:02:11.160662 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160673 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" Jan 26 19:02:11 crc kubenswrapper[4770]: E0126 19:02:11.160718 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" Jan 26 19:02:11 crc 
kubenswrapper[4770]: I0126 19:02:11.160726 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" Jan 26 19:02:11 crc kubenswrapper[4770]: E0126 19:02:11.160738 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="init" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160745 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="init" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160936 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160958 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.160980 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.161000 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b185c8a-0b51-4433-9e44-2121cb5415ba" containerName="neutron-db-sync" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.161970 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.164511 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.164554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.164819 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.187426 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.266879 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-config-data\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.266961 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.267016 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.267033 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe6a16b-f234-4dcc-800e-7eb6338cc264-logs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.267055 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-public-tls-certs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.267104 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.267205 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dpl5\" (UniqueName: \"kubernetes.io/projected/fbe6a16b-f234-4dcc-800e-7eb6338cc264-kube-api-access-2dpl5\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.369242 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-config-data\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.369347 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-internal-tls-certs\") pod \"watcher-api-0\" 
(UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.369419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.369444 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe6a16b-f234-4dcc-800e-7eb6338cc264-logs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.369471 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-public-tls-certs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.369541 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.369568 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dpl5\" (UniqueName: \"kubernetes.io/projected/fbe6a16b-f234-4dcc-800e-7eb6338cc264-kube-api-access-2dpl5\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.377618 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.377957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe6a16b-f234-4dcc-800e-7eb6338cc264-logs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.378961 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-public-tls-certs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.379608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-config-data\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.380324 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.381251 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fbe6a16b-f234-4dcc-800e-7eb6338cc264-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 
crc kubenswrapper[4770]: I0126 19:02:11.401381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dpl5\" (UniqueName: \"kubernetes.io/projected/fbe6a16b-f234-4dcc-800e-7eb6338cc264-kube-api-access-2dpl5\") pod \"watcher-api-0\" (UID: \"fbe6a16b-f234-4dcc-800e-7eb6338cc264\") " pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.491882 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.787974 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028d5f93-dd95-4a7d-a5b0-8b6c1815019e" path="/var/lib/kubelet/pods/028d5f93-dd95-4a7d-a5b0-8b6c1815019e/volumes" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.788423 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" path="/var/lib/kubelet/pods/221c0ba1-05b9-4079-9ba6-e9efe82d66c8/volumes" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.789087 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" path="/var/lib/kubelet/pods/2607908d-b3c2-41a1-b445-386aacb914f1/volumes" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.790191 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e78737b-a30f-410b-b5c4-7ea7fb79cae5" path="/var/lib/kubelet/pods/3e78737b-a30f-410b-b5c4-7ea7fb79cae5/volumes" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.790607 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba97d4cb-f979-4520-919c-891dac22767a" path="/var/lib/kubelet/pods/ba97d4cb-f979-4520-919c-891dac22767a/volumes" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.880332 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878674dd9-pkgz7"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.886819 4770 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.923536 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878674dd9-pkgz7"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.946883 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74fdc6454-kxn5b"] Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.948594 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.957508 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.957681 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.957890 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rb6zt" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.958277 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.986261 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-swift-storage-0\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.986658 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-svc\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" 
(UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.986811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-sb\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.987046 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-config\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.987428 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb9wh\" (UniqueName: \"kubernetes.io/projected/b5223e91-68cc-4d7a-91ca-c58e530ef973-kube-api-access-kb9wh\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:11 crc kubenswrapper[4770]: I0126 19:02:11.987851 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-nb\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.012977 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74fdc6454-kxn5b"] Jan 26 19:02:12 crc kubenswrapper[4770]: E0126 19:02:12.013839 4770 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 19:02:12 crc kubenswrapper[4770]: E0126 19:02:12.014014 4770 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 26 19:02:12 crc kubenswrapper[4770]: E0126 19:02:12.014539 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.223:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Vo
lumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8b5pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-f98bs_openstack(200a66de-48c2-4fad-babc-4e45e99790cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 19:02:12 crc kubenswrapper[4770]: E0126 19:02:12.016448 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-f98bs" podUID="200a66de-48c2-4fad-babc-4e45e99790cd" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.020116 4770 scope.go:117] "RemoveContainer" containerID="fadae1190b353aa0003827894e7c2a13994c60638c759de95df227882a77c2d9" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.053118 4770 scope.go:117] "RemoveContainer" containerID="703a49ed8544395bb8dc435e829ca9d108a2da655c3ead922cbbab8d0528cfe9" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.091490 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-config\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.091669 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb9wh\" (UniqueName: \"kubernetes.io/projected/b5223e91-68cc-4d7a-91ca-c58e530ef973-kube-api-access-kb9wh\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.091777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-nb\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.091838 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-swift-storage-0\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.091883 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-svc\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.091935 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-ovndb-tls-certs\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.091962 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-sb\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.092010 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-httpd-config\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.092096 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-config\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.092404 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-combined-ca-bundle\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.092423 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq4wd\" 
(UniqueName: \"kubernetes.io/projected/10eb4373-dea4-4b6f-9c1d-d1c49352699d-kube-api-access-cq4wd\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.093958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-svc\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.095261 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-config\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.096970 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-sb\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.097185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-nb\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.097276 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.125237 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb9wh\" (UniqueName: \"kubernetes.io/projected/b5223e91-68cc-4d7a-91ca-c58e530ef973-kube-api-access-kb9wh\") pod \"dnsmasq-dns-7878674dd9-pkgz7\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.194542 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-ovndb-tls-certs\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.194611 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-httpd-config\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.194686 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-combined-ca-bundle\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.194731 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/10eb4373-dea4-4b6f-9c1d-d1c49352699d-kube-api-access-cq4wd\") pod \"neutron-74fdc6454-kxn5b\" (UID: 
\"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.194808 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-config\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.199426 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-httpd-config\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.201605 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-combined-ca-bundle\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.205595 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-ovndb-tls-certs\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.211224 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-config\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.212869 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/10eb4373-dea4-4b6f-9c1d-d1c49352699d-kube-api-access-cq4wd\") pod \"neutron-74fdc6454-kxn5b\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.238419 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.291522 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.297316 4770 scope.go:117] "RemoveContainer" containerID="10daab1d8fac251eeb01091619019cc6bd09909e75e0f39f212b1886fc2bd748" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.344346 4770 scope.go:117] "RemoveContainer" containerID="667a00d1d32107d838d2a0496ca447f59a9b03b43941ba644edbc8a1e46292e8" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.390556 4770 scope.go:117] "RemoveContainer" containerID="010bb8b3f14b304fb3c478ee2f2c39df24d878a242c6e8f795849de3257dda5a" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.430129 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d87d859d9-ll7rh" podUID="2607908d-b3c2-41a1-b445-386aacb914f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.595762 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77b47dc986-cqqn6"] Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.603145 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f47668778-9m4hm"] Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.751815 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" 
podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.752169 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="221c0ba1-05b9-4079-9ba6-e9efe82d66c8" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.771258 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerStarted","Data":"c89767cdff396cb5d71c63a8907cd883a50d04d95d098934b7512579ad6d1885"} Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.788790 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f47668778-9m4hm" event={"ID":"8adb68a1-1d86-4d72-93b1-0e8e499542af","Type":"ContainerStarted","Data":"606523be3b1a5051c8560d0dea73b8f9a87810b1791f5d86df063d3fad8cdfbc"} Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.804917 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wjwrr" event={"ID":"8cd21f2e-d98a-4363-afc3-5707b0ee540d","Type":"ContainerStarted","Data":"2b3d11a27f6e7d1b76edaf917c2ad0fc65b2bdb9bba43ca41fca50e159770ad7"} Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.822354 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.824671 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=12.407747308 podStartE2EDuration="40.824647581s" podCreationTimestamp="2026-01-26 19:01:32 +0000 UTC" 
firstStartedPulling="2026-01-26 19:01:33.441027144 +0000 UTC m=+1178.005933876" lastFinishedPulling="2026-01-26 19:02:01.857927397 +0000 UTC m=+1206.422834149" observedRunningTime="2026-01-26 19:02:12.818604526 +0000 UTC m=+1217.383511258" watchObservedRunningTime="2026-01-26 19:02:12.824647581 +0000 UTC m=+1217.389554313" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.871172 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wjwrr" podStartSLOduration=3.34989229 podStartE2EDuration="43.871151521s" podCreationTimestamp="2026-01-26 19:01:29 +0000 UTC" firstStartedPulling="2026-01-26 19:01:31.620245332 +0000 UTC m=+1176.185152064" lastFinishedPulling="2026-01-26 19:02:12.141504563 +0000 UTC m=+1216.706411295" observedRunningTime="2026-01-26 19:02:12.853516849 +0000 UTC m=+1217.418423581" watchObservedRunningTime="2026-01-26 19:02:12.871151521 +0000 UTC m=+1217.436058263" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.879574 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine" probeResult="failure" output="" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.905990 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e5e85df5-499b-4543-aab5-e1d3ce9d1473","Type":"ContainerStarted","Data":"0e1a5a165e0683a30cff88971458421f4cd1232a55073e2c6b8dd04c610d2b39"} Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.932818 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.165223218 podStartE2EDuration="40.932804514s" podCreationTimestamp="2026-01-26 19:01:32 +0000 UTC" firstStartedPulling="2026-01-26 19:01:33.615573879 +0000 UTC m=+1178.180480611" lastFinishedPulling="2026-01-26 19:02:10.383155175 +0000 UTC m=+1214.948061907" 
observedRunningTime="2026-01-26 19:02:12.930108181 +0000 UTC m=+1217.495014903" watchObservedRunningTime="2026-01-26 19:02:12.932804514 +0000 UTC m=+1217.497711246" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.944522 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tx8s8" event={"ID":"380a5f13-cc8e-42b0-92db-e487e61edcb9","Type":"ContainerStarted","Data":"96f1180ba7c1a64658df24ca29485dd9bae37d7debfac2c9edd662e7afa48114"} Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.946010 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b47dc986-cqqn6" event={"ID":"65b445e3-2f98-4b3d-9290-4e7eff894ef0","Type":"ContainerStarted","Data":"9ee7b0ff8e56fab3855b5158a71bd984a9acd7186d5b06793566a17909acfeef"} Jan 26 19:02:12 crc kubenswrapper[4770]: E0126 19:02:12.953281 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-f98bs" podUID="200a66de-48c2-4fad-babc-4e45e99790cd" Jan 26 19:02:12 crc kubenswrapper[4770]: I0126 19:02:12.982782 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-tx8s8" podStartSLOduration=3.3685203980000002 podStartE2EDuration="43.982744097s" podCreationTimestamp="2026-01-26 19:01:29 +0000 UTC" firstStartedPulling="2026-01-26 19:01:31.473541017 +0000 UTC m=+1176.038447739" lastFinishedPulling="2026-01-26 19:02:12.087764706 +0000 UTC m=+1216.652671438" observedRunningTime="2026-01-26 19:02:12.981366229 +0000 UTC m=+1217.546273071" watchObservedRunningTime="2026-01-26 19:02:12.982744097 +0000 UTC m=+1217.547650829" Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.078088 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 26 19:02:13 crc 
kubenswrapper[4770]: I0126 19:02:13.100918 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sz7sx"] Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.147099 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.188405 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878674dd9-pkgz7"] Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.554765 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74fdc6454-kxn5b"] Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.957841 4770 generic.go:334] "Generic (PLEG): container finished" podID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerID="27781c49422cc1b57d9b88957770b80f522190aaf26026159320aaf1558791ee" exitCode=0 Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.957900 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" event={"ID":"b5223e91-68cc-4d7a-91ca-c58e530ef973","Type":"ContainerDied","Data":"27781c49422cc1b57d9b88957770b80f522190aaf26026159320aaf1558791ee"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.957932 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" event={"ID":"b5223e91-68cc-4d7a-91ca-c58e530ef973","Type":"ContainerStarted","Data":"2eba6ded34245e9b2c2fb43d0e4be28c21ae9daa3a71ee375f1ed7e2be6b9c7c"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.960387 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"fbe6a16b-f234-4dcc-800e-7eb6338cc264","Type":"ContainerStarted","Data":"df25ba2c8c701b25304919380894ab4d5e55686e24bb8722fb9ae0655a92a6a3"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.960427 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"fbe6a16b-f234-4dcc-800e-7eb6338cc264","Type":"ContainerStarted","Data":"5a246c27107238e2f9db49738dc4395d06838f5b84dc29a4d49db115e839bd89"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.963737 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz7sx" event={"ID":"2a1913a3-ef04-48f1-9e48-d669c97e66cb","Type":"ContainerStarted","Data":"9c588a6d2154e7bd801452921a77a730a74ec89edbdc20f69b021498ae749d2f"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.963783 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz7sx" event={"ID":"2a1913a3-ef04-48f1-9e48-d669c97e66cb","Type":"ContainerStarted","Data":"e12a644ab29dfb939190f5f3bb66977ca29f40a84dacd332c8e1bf1b5459cd2d"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.970961 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f47668778-9m4hm" event={"ID":"8adb68a1-1d86-4d72-93b1-0e8e499542af","Type":"ContainerStarted","Data":"00d8410891c3266be02e94a7492de06621996331830bc8b8d3cfe1d17da1f3fb"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.971008 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f47668778-9m4hm" event={"ID":"8adb68a1-1d86-4d72-93b1-0e8e499542af","Type":"ContainerStarted","Data":"424e384591d9962673acb328847231755ae004f2ec839d227ef88b67b1f4fa9e"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.983488 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fdc6454-kxn5b" event={"ID":"10eb4373-dea4-4b6f-9c1d-d1c49352699d","Type":"ContainerStarted","Data":"878b584b6d82e40268b7328bcd1750de9e387945f0448164e1d8bab2d9e25aa3"} Jan 26 19:02:13 crc kubenswrapper[4770]: I0126 19:02:13.987368 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b47dc986-cqqn6" 
event={"ID":"65b445e3-2f98-4b3d-9290-4e7eff894ef0","Type":"ContainerStarted","Data":"317e02bdb0fcd1c5a43a58eba9c7db874a731da1ea52adcf7eb87c6f598eb7fd"} Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.017517 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-sz7sx" podStartSLOduration=21.017491553 podStartE2EDuration="21.017491553s" podCreationTimestamp="2026-01-26 19:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:14.008161769 +0000 UTC m=+1218.573068511" watchObservedRunningTime="2026-01-26 19:02:14.017491553 +0000 UTC m=+1218.582398285" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.040389 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f47668778-9m4hm" podStartSLOduration=35.89610201 podStartE2EDuration="36.040367828s" podCreationTimestamp="2026-01-26 19:01:38 +0000 UTC" firstStartedPulling="2026-01-26 19:02:12.714174576 +0000 UTC m=+1217.279081298" lastFinishedPulling="2026-01-26 19:02:12.858440384 +0000 UTC m=+1217.423347116" observedRunningTime="2026-01-26 19:02:14.030843838 +0000 UTC m=+1218.595750580" watchObservedRunningTime="2026-01-26 19:02:14.040367828 +0000 UTC m=+1218.605274560" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.451608 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56d4478bc7-wx9fs"] Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.485396 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.490149 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56d4478bc7-wx9fs"] Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.490547 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.490590 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.613876 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x6m5\" (UniqueName: \"kubernetes.io/projected/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-kube-api-access-7x6m5\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.613951 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-config\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.613981 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-ovndb-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.614021 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-combined-ca-bundle\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.614058 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-httpd-config\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.614096 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-internal-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.614115 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-public-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.716912 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-internal-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.717010 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-public-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.717167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x6m5\" (UniqueName: \"kubernetes.io/projected/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-kube-api-access-7x6m5\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.717210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-config\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.717236 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-ovndb-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.717270 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-combined-ca-bundle\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.717310 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-httpd-config\") pod 
\"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.723975 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-httpd-config\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.727501 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-internal-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.739453 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-combined-ca-bundle\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.754052 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-public-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.757586 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-config\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc 
kubenswrapper[4770]: I0126 19:02:14.766525 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-ovndb-tls-certs\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.767510 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x6m5\" (UniqueName: \"kubernetes.io/projected/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-kube-api-access-7x6m5\") pod \"neutron-56d4478bc7-wx9fs\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") " pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:14 crc kubenswrapper[4770]: I0126 19:02:14.976492 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:15 crc kubenswrapper[4770]: I0126 19:02:15.708160 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56d4478bc7-wx9fs"] Jan 26 19:02:15 crc kubenswrapper[4770]: W0126 19:02:15.718171 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85972ec_8d1c_4d0a_9696_a8c2bae4607f.slice/crio-750520402cb7212668cd6e898f0d80efee21bf8231dc2b0712eaf5d35a0d3289 WatchSource:0}: Error finding container 750520402cb7212668cd6e898f0d80efee21bf8231dc2b0712eaf5d35a0d3289: Status 404 returned error can't find the container with id 750520402cb7212668cd6e898f0d80efee21bf8231dc2b0712eaf5d35a0d3289 Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.018159 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77b47dc986-cqqn6" event={"ID":"65b445e3-2f98-4b3d-9290-4e7eff894ef0","Type":"ContainerStarted","Data":"36e87a5320cb6c29d41873048268b2f86add8116ded7348f968874ab7c031e6a"} Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.025462 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d4478bc7-wx9fs" event={"ID":"e85972ec-8d1c-4d0a-9696-a8c2bae4607f","Type":"ContainerStarted","Data":"750520402cb7212668cd6e898f0d80efee21bf8231dc2b0712eaf5d35a0d3289"} Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.028127 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" event={"ID":"b5223e91-68cc-4d7a-91ca-c58e530ef973","Type":"ContainerStarted","Data":"a8892a7c83369c1efbbecd25002cc5eec8186c20a1bf4fb34296877cad6d6feb"} Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.028432 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.053135 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"fbe6a16b-f234-4dcc-800e-7eb6338cc264","Type":"ContainerStarted","Data":"08e1818b1d8bcdad2f1c5beda2f29b5cedfb4dd9ff59ea496fe5b12bcc538c8e"} Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.056303 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.060564 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77b47dc986-cqqn6" podStartSLOduration=37.88527884 podStartE2EDuration="38.060542184s" podCreationTimestamp="2026-01-26 19:01:38 +0000 UTC" firstStartedPulling="2026-01-26 19:02:12.683996392 +0000 UTC m=+1217.248903124" lastFinishedPulling="2026-01-26 19:02:12.859259736 +0000 UTC m=+1217.424166468" observedRunningTime="2026-01-26 19:02:16.050800988 +0000 UTC m=+1220.615707720" watchObservedRunningTime="2026-01-26 19:02:16.060542184 +0000 UTC m=+1220.625448916" Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.063324 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fdc6454-kxn5b" 
event={"ID":"10eb4373-dea4-4b6f-9c1d-d1c49352699d","Type":"ContainerStarted","Data":"7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79"} Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.063515 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fdc6454-kxn5b" event={"ID":"10eb4373-dea4-4b6f-9c1d-d1c49352699d","Type":"ContainerStarted","Data":"3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97"} Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.065264 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.090062 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerStarted","Data":"d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc"} Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.104704 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" podStartSLOduration=5.104671548 podStartE2EDuration="5.104671548s" podCreationTimestamp="2026-01-26 19:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:16.095722714 +0000 UTC m=+1220.660629446" watchObservedRunningTime="2026-01-26 19:02:16.104671548 +0000 UTC m=+1220.669578290" Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.130636 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.130611627 podStartE2EDuration="5.130611627s" podCreationTimestamp="2026-01-26 19:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:16.121142088 +0000 UTC m=+1220.686048820" 
watchObservedRunningTime="2026-01-26 19:02:16.130611627 +0000 UTC m=+1220.695518369" Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.148506 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74fdc6454-kxn5b" podStartSLOduration=5.148486724 podStartE2EDuration="5.148486724s" podCreationTimestamp="2026-01-26 19:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:16.144920297 +0000 UTC m=+1220.709827029" watchObservedRunningTime="2026-01-26 19:02:16.148486724 +0000 UTC m=+1220.713393466" Jan 26 19:02:16 crc kubenswrapper[4770]: I0126 19:02:16.492320 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 26 19:02:17 crc kubenswrapper[4770]: I0126 19:02:17.098867 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d4478bc7-wx9fs" event={"ID":"e85972ec-8d1c-4d0a-9696-a8c2bae4607f","Type":"ContainerStarted","Data":"bc3f922a8c90ab70df5d9e39eaf37090517994e25776d5cda6209a84eb615cc1"} Jan 26 19:02:17 crc kubenswrapper[4770]: I0126 19:02:17.826979 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 26 19:02:18 crc kubenswrapper[4770]: I0126 19:02:18.106546 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:02:18 crc kubenswrapper[4770]: I0126 19:02:18.676085 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:02:18 crc kubenswrapper[4770]: I0126 19:02:18.676729 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:02:18 crc kubenswrapper[4770]: I0126 19:02:18.820838 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:02:18 crc kubenswrapper[4770]: 
I0126 19:02:18.820919 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:02:18 crc kubenswrapper[4770]: I0126 19:02:18.874217 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 19:02:19 crc kubenswrapper[4770]: I0126 19:02:19.119119 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d4478bc7-wx9fs" event={"ID":"e85972ec-8d1c-4d0a-9696-a8c2bae4607f","Type":"ContainerStarted","Data":"56bed967675182c3b2fd83364e4a6690c7d3127df5c2fa061c93f26b4908d9ba"} Jan 26 19:02:19 crc kubenswrapper[4770]: I0126 19:02:19.120228 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56d4478bc7-wx9fs" Jan 26 19:02:19 crc kubenswrapper[4770]: I0126 19:02:19.142326 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56d4478bc7-wx9fs" podStartSLOduration=5.142308269 podStartE2EDuration="5.142308269s" podCreationTimestamp="2026-01-26 19:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:19.140495959 +0000 UTC m=+1223.705402691" watchObservedRunningTime="2026-01-26 19:02:19.142308269 +0000 UTC m=+1223.707214991" Jan 26 19:02:20 crc kubenswrapper[4770]: I0126 19:02:20.129187 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q2sdv" event={"ID":"9d149076-49cc-4a5a-80f8-c34dac1c2b45","Type":"ContainerStarted","Data":"725d45f04a1a6f23a0aa0a8e35fac44c1410e163f48c70408f194d0a3641477a"} Jan 26 19:02:20 crc kubenswrapper[4770]: I0126 19:02:20.150260 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-q2sdv" podStartSLOduration=3.22763297 podStartE2EDuration="50.150239212s" podCreationTimestamp="2026-01-26 19:01:30 +0000 UTC" firstStartedPulling="2026-01-26 
19:01:31.859654567 +0000 UTC m=+1176.424561299" lastFinishedPulling="2026-01-26 19:02:18.782260809 +0000 UTC m=+1223.347167541" observedRunningTime="2026-01-26 19:02:20.142860581 +0000 UTC m=+1224.707767313" watchObservedRunningTime="2026-01-26 19:02:20.150239212 +0000 UTC m=+1224.715145944" Jan 26 19:02:21 crc kubenswrapper[4770]: I0126 19:02:21.493202 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 26 19:02:21 crc kubenswrapper[4770]: I0126 19:02:21.508997 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.157221 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.242844 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.339273 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756947f775-qtsh2"] Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.339879 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-756947f775-qtsh2" podUID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerName="dnsmasq-dns" containerID="cri-o://631a8fb295b08214f90f2ccbc9bd46529d99d14399365d5141a68416cbeb9ad5" gracePeriod=10 Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.826999 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.852564 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.852892 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/watcher-decision-engine-0" Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.863861 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 26 19:02:22 crc kubenswrapper[4770]: I0126 19:02:22.884564 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.164201 4770 generic.go:334] "Generic (PLEG): container finished" podID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerID="631a8fb295b08214f90f2ccbc9bd46529d99d14399365d5141a68416cbeb9ad5" exitCode=0 Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.164281 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756947f775-qtsh2" event={"ID":"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7","Type":"ContainerDied","Data":"631a8fb295b08214f90f2ccbc9bd46529d99d14399365d5141a68416cbeb9ad5"} Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.165455 4770 generic.go:334] "Generic (PLEG): container finished" podID="2a1913a3-ef04-48f1-9e48-d669c97e66cb" containerID="9c588a6d2154e7bd801452921a77a730a74ec89edbdc20f69b021498ae749d2f" exitCode=0 Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.165590 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz7sx" event={"ID":"2a1913a3-ef04-48f1-9e48-d669c97e66cb","Type":"ContainerDied","Data":"9c588a6d2154e7bd801452921a77a730a74ec89edbdc20f69b021498ae749d2f"} Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.216995 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.256798 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.415494 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-nb\") pod \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.415556 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-swift-storage-0\") pod \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.415624 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4mdx\" (UniqueName: \"kubernetes.io/projected/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-kube-api-access-s4mdx\") pod \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.416305 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-config\") pod \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.416361 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-svc\") pod \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.416460 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-sb\") pod \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\" (UID: \"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7\") " Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.421942 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-kube-api-access-s4mdx" (OuterVolumeSpecName: "kube-api-access-s4mdx") pod "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" (UID: "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7"). InnerVolumeSpecName "kube-api-access-s4mdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.510991 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" (UID: "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.511224 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" (UID: "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.511322 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" (UID: "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.511367 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" (UID: "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.513485 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-config" (OuterVolumeSpecName: "config") pod "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" (UID: "aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.519111 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.519143 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.519158 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4mdx\" (UniqueName: \"kubernetes.io/projected/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-kube-api-access-s4mdx\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.519169 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-config\") on node \"crc\" DevicePath \"\"" Jan 
26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.519181 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:23 crc kubenswrapper[4770]: I0126 19:02:23.519193 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:24 crc kubenswrapper[4770]: I0126 19:02:24.176994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756947f775-qtsh2" event={"ID":"aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7","Type":"ContainerDied","Data":"78fd270d2209f0875bf008fa0efb470759cef54c792500eaf3387a36bd48cb1d"} Jan 26 19:02:24 crc kubenswrapper[4770]: I0126 19:02:24.177113 4770 scope.go:117] "RemoveContainer" containerID="631a8fb295b08214f90f2ccbc9bd46529d99d14399365d5141a68416cbeb9ad5" Jan 26 19:02:24 crc kubenswrapper[4770]: I0126 19:02:24.177120 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756947f775-qtsh2" Jan 26 19:02:24 crc kubenswrapper[4770]: I0126 19:02:24.205421 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756947f775-qtsh2"] Jan 26 19:02:24 crc kubenswrapper[4770]: I0126 19:02:24.209923 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-756947f775-qtsh2"] Jan 26 19:02:25 crc kubenswrapper[4770]: I0126 19:02:25.783938 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" path="/var/lib/kubelet/pods/aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7/volumes" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.147466 4770 scope.go:117] "RemoveContainer" containerID="485d376c5c8851e562c16fc2e57c3b348368c7dec1fd982de2c6e89e6af35d29" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.200073 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sz7sx" event={"ID":"2a1913a3-ef04-48f1-9e48-d669c97e66cb","Type":"ContainerDied","Data":"e12a644ab29dfb939190f5f3bb66977ca29f40a84dacd332c8e1bf1b5459cd2d"} Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.200351 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12a644ab29dfb939190f5f3bb66977ca29f40a84dacd332c8e1bf1b5459cd2d" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.203480 4770 generic.go:334] "Generic (PLEG): container finished" podID="8cd21f2e-d98a-4363-afc3-5707b0ee540d" containerID="2b3d11a27f6e7d1b76edaf917c2ad0fc65b2bdb9bba43ca41fca50e159770ad7" exitCode=0 Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.203546 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wjwrr" event={"ID":"8cd21f2e-d98a-4363-afc3-5707b0ee540d","Type":"ContainerDied","Data":"2b3d11a27f6e7d1b76edaf917c2ad0fc65b2bdb9bba43ca41fca50e159770ad7"} Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.377538 4770 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.479329 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-config-data\") pod \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.479438 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-combined-ca-bundle\") pod \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.479475 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-credential-keys\") pod \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.479568 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-fernet-keys\") pod \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.479709 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-scripts\") pod \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.479789 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-czsbt\" (UniqueName: \"kubernetes.io/projected/2a1913a3-ef04-48f1-9e48-d669c97e66cb-kube-api-access-czsbt\") pod \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\" (UID: \"2a1913a3-ef04-48f1-9e48-d669c97e66cb\") " Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.486129 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1913a3-ef04-48f1-9e48-d669c97e66cb-kube-api-access-czsbt" (OuterVolumeSpecName: "kube-api-access-czsbt") pod "2a1913a3-ef04-48f1-9e48-d669c97e66cb" (UID: "2a1913a3-ef04-48f1-9e48-d669c97e66cb"). InnerVolumeSpecName "kube-api-access-czsbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.491903 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2a1913a3-ef04-48f1-9e48-d669c97e66cb" (UID: "2a1913a3-ef04-48f1-9e48-d669c97e66cb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.493297 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2a1913a3-ef04-48f1-9e48-d669c97e66cb" (UID: "2a1913a3-ef04-48f1-9e48-d669c97e66cb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.496655 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-scripts" (OuterVolumeSpecName: "scripts") pod "2a1913a3-ef04-48f1-9e48-d669c97e66cb" (UID: "2a1913a3-ef04-48f1-9e48-d669c97e66cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.518003 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a1913a3-ef04-48f1-9e48-d669c97e66cb" (UID: "2a1913a3-ef04-48f1-9e48-d669c97e66cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.543383 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-config-data" (OuterVolumeSpecName: "config-data") pod "2a1913a3-ef04-48f1-9e48-d669c97e66cb" (UID: "2a1913a3-ef04-48f1-9e48-d669c97e66cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.581886 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.581933 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.581945 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czsbt\" (UniqueName: \"kubernetes.io/projected/2a1913a3-ef04-48f1-9e48-d669c97e66cb-kube-api-access-czsbt\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.581957 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc 
kubenswrapper[4770]: I0126 19:02:26.581965 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:26 crc kubenswrapper[4770]: I0126 19:02:26.581973 4770 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2a1913a3-ef04-48f1-9e48-d669c97e66cb-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.215356 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f98bs" event={"ID":"200a66de-48c2-4fad-babc-4e45e99790cd","Type":"ContainerStarted","Data":"1ccec2d55f09f36fd639413394264d95c642e5899e056eb76b6565b818f4a0f3"} Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.218820 4770 generic.go:334] "Generic (PLEG): container finished" podID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerID="c89767cdff396cb5d71c63a8907cd883a50d04d95d098934b7512579ad6d1885" exitCode=1 Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.218942 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerDied","Data":"c89767cdff396cb5d71c63a8907cd883a50d04d95d098934b7512579ad6d1885"} Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.219657 4770 scope.go:117] "RemoveContainer" containerID="c89767cdff396cb5d71c63a8907cd883a50d04d95d098934b7512579ad6d1885" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.224937 4770 generic.go:334] "Generic (PLEG): container finished" podID="380a5f13-cc8e-42b0-92db-e487e61edcb9" containerID="96f1180ba7c1a64658df24ca29485dd9bae37d7debfac2c9edd662e7afa48114" exitCode=0 Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.225040 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tx8s8" 
event={"ID":"380a5f13-cc8e-42b0-92db-e487e61edcb9","Type":"ContainerDied","Data":"96f1180ba7c1a64658df24ca29485dd9bae37d7debfac2c9edd662e7afa48114"} Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.232161 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sz7sx" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.232863 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerStarted","Data":"93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe"} Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.235502 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-f98bs" podStartSLOduration=3.498810315 podStartE2EDuration="58.235486602s" podCreationTimestamp="2026-01-26 19:01:29 +0000 UTC" firstStartedPulling="2026-01-26 19:01:31.527590873 +0000 UTC m=+1176.092497605" lastFinishedPulling="2026-01-26 19:02:26.26426716 +0000 UTC m=+1230.829173892" observedRunningTime="2026-01-26 19:02:27.234079704 +0000 UTC m=+1231.798986446" watchObservedRunningTime="2026-01-26 19:02:27.235486602 +0000 UTC m=+1231.800393334" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.538848 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-dfccf5f44-hghd8"] Jan 26 19:02:27 crc kubenswrapper[4770]: E0126 19:02:27.539767 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerName="dnsmasq-dns" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.539806 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerName="dnsmasq-dns" Jan 26 19:02:27 crc kubenswrapper[4770]: E0126 19:02:27.539827 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerName="init" Jan 26 
19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.539835 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerName="init" Jan 26 19:02:27 crc kubenswrapper[4770]: E0126 19:02:27.539848 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1913a3-ef04-48f1-9e48-d669c97e66cb" containerName="keystone-bootstrap" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.539856 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1913a3-ef04-48f1-9e48-d669c97e66cb" containerName="keystone-bootstrap" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.540080 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabc0430-08cc-40d5-8ddf-c6cd29d5a9a7" containerName="dnsmasq-dns" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.540118 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1913a3-ef04-48f1-9e48-d669c97e66cb" containerName="keystone-bootstrap" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.540959 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.545748 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-dfccf5f44-hghd8"] Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.549855 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.550578 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.550713 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.550876 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.550893 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.550911 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hkvsm" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641632 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrlc\" (UniqueName: \"kubernetes.io/projected/d119257f-62e4-4f5b-8c56-3bd82b5b6041-kube-api-access-xcrlc\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641691 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-scripts\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " 
pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641778 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-config-data\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641827 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-fernet-keys\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641889 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-combined-ca-bundle\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641907 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-credential-keys\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641934 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-public-tls-certs\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " 
pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.641971 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-internal-tls-certs\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.744491 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-fernet-keys\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.744590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-combined-ca-bundle\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.744619 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-credential-keys\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.744650 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-public-tls-certs\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc 
kubenswrapper[4770]: I0126 19:02:27.744675 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-internal-tls-certs\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.744747 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcrlc\" (UniqueName: \"kubernetes.io/projected/d119257f-62e4-4f5b-8c56-3bd82b5b6041-kube-api-access-xcrlc\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.744776 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-scripts\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.746317 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-config-data\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.750466 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wjwrr" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.751066 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-internal-tls-certs\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.751337 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-fernet-keys\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.752383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-public-tls-certs\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.754412 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-scripts\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.755481 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-combined-ca-bundle\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.756258 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-config-data\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.758869 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d119257f-62e4-4f5b-8c56-3bd82b5b6041-credential-keys\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.773241 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcrlc\" (UniqueName: \"kubernetes.io/projected/d119257f-62e4-4f5b-8c56-3bd82b5b6041-kube-api-access-xcrlc\") pod \"keystone-dfccf5f44-hghd8\" (UID: \"d119257f-62e4-4f5b-8c56-3bd82b5b6041\") " pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.849267 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcb6g\" (UniqueName: \"kubernetes.io/projected/8cd21f2e-d98a-4363-afc3-5707b0ee540d-kube-api-access-pcb6g\") pod \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.849486 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd21f2e-d98a-4363-afc3-5707b0ee540d-logs\") pod \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.849616 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-config-data\") pod \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.849843 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-combined-ca-bundle\") pod \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.850638 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-scripts\") pod \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\" (UID: \"8cd21f2e-d98a-4363-afc3-5707b0ee540d\") " Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.852028 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd21f2e-d98a-4363-afc3-5707b0ee540d-logs" (OuterVolumeSpecName: "logs") pod "8cd21f2e-d98a-4363-afc3-5707b0ee540d" (UID: "8cd21f2e-d98a-4363-afc3-5707b0ee540d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.856829 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-scripts" (OuterVolumeSpecName: "scripts") pod "8cd21f2e-d98a-4363-afc3-5707b0ee540d" (UID: "8cd21f2e-d98a-4363-afc3-5707b0ee540d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.857020 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd21f2e-d98a-4363-afc3-5707b0ee540d-kube-api-access-pcb6g" (OuterVolumeSpecName: "kube-api-access-pcb6g") pod "8cd21f2e-d98a-4363-afc3-5707b0ee540d" (UID: "8cd21f2e-d98a-4363-afc3-5707b0ee540d"). InnerVolumeSpecName "kube-api-access-pcb6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.896894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-config-data" (OuterVolumeSpecName: "config-data") pod "8cd21f2e-d98a-4363-afc3-5707b0ee540d" (UID: "8cd21f2e-d98a-4363-afc3-5707b0ee540d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.897029 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.904817 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cd21f2e-d98a-4363-afc3-5707b0ee540d" (UID: "8cd21f2e-d98a-4363-afc3-5707b0ee540d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.953019 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcb6g\" (UniqueName: \"kubernetes.io/projected/8cd21f2e-d98a-4363-afc3-5707b0ee540d-kube-api-access-pcb6g\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.953432 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cd21f2e-d98a-4363-afc3-5707b0ee540d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.953449 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.953463 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:27 crc kubenswrapper[4770]: I0126 19:02:27.953474 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd21f2e-d98a-4363-afc3-5707b0ee540d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.256610 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wjwrr" event={"ID":"8cd21f2e-d98a-4363-afc3-5707b0ee540d","Type":"ContainerDied","Data":"b33952e7c262496902f6176a637223b5f1b2ab1e7f17b779023161976301b394"} Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.256658 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b33952e7c262496902f6176a637223b5f1b2ab1e7f17b779023161976301b394" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.256740 4770 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-wjwrr" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.266094 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerStarted","Data":"8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4"} Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.421627 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5dfdbdd84d-x7fsz"] Jan 26 19:02:28 crc kubenswrapper[4770]: E0126 19:02:28.422232 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd21f2e-d98a-4363-afc3-5707b0ee540d" containerName="placement-db-sync" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.422249 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd21f2e-d98a-4363-afc3-5707b0ee540d" containerName="placement-db-sync" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.422437 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd21f2e-d98a-4363-afc3-5707b0ee540d" containerName="placement-db-sync" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.423362 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.426933 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.427069 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.427252 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.427627 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p8xgx" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.428140 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.452292 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dfdbdd84d-x7fsz"] Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.515475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7f6z\" (UniqueName: \"kubernetes.io/projected/a884b73b-0f60-4327-a836-b9c20f70b6e6-kube-api-access-b7f6z\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.515549 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-combined-ca-bundle\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.515591 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-public-tls-certs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.515608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a884b73b-0f60-4327-a836-b9c20f70b6e6-logs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.515627 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-internal-tls-certs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.515668 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-config-data\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.515682 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-scripts\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.522898 4770 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/keystone-dfccf5f44-hghd8"] Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.619732 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-public-tls-certs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.619783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a884b73b-0f60-4327-a836-b9c20f70b6e6-logs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.619805 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-internal-tls-certs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.619853 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-config-data\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.619868 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-scripts\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc 
kubenswrapper[4770]: I0126 19:02:28.619917 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7f6z\" (UniqueName: \"kubernetes.io/projected/a884b73b-0f60-4327-a836-b9c20f70b6e6-kube-api-access-b7f6z\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.619968 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-combined-ca-bundle\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.623187 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a884b73b-0f60-4327-a836-b9c20f70b6e6-logs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.626016 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-combined-ca-bundle\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.626183 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-internal-tls-certs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.641412 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-public-tls-certs\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.660423 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-config-data\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.660550 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a884b73b-0f60-4327-a836-b9c20f70b6e6-scripts\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.665770 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7f6z\" (UniqueName: \"kubernetes.io/projected/a884b73b-0f60-4327-a836-b9c20f70b6e6-kube-api-access-b7f6z\") pod \"placement-5dfdbdd84d-x7fsz\" (UID: \"a884b73b-0f60-4327-a836-b9c20f70b6e6\") " pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.754286 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.759021 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.835956 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77b47dc986-cqqn6" podUID="65b445e3-2f98-4b3d-9290-4e7eff894ef0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.926303 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-db-sync-config-data\") pod \"380a5f13-cc8e-42b0-92db-e487e61edcb9\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.926419 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvr55\" (UniqueName: \"kubernetes.io/projected/380a5f13-cc8e-42b0-92db-e487e61edcb9-kube-api-access-lvr55\") pod \"380a5f13-cc8e-42b0-92db-e487e61edcb9\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.926496 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-combined-ca-bundle\") pod \"380a5f13-cc8e-42b0-92db-e487e61edcb9\" (UID: \"380a5f13-cc8e-42b0-92db-e487e61edcb9\") " Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.934293 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "380a5f13-cc8e-42b0-92db-e487e61edcb9" (UID: "380a5f13-cc8e-42b0-92db-e487e61edcb9"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.937018 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380a5f13-cc8e-42b0-92db-e487e61edcb9-kube-api-access-lvr55" (OuterVolumeSpecName: "kube-api-access-lvr55") pod "380a5f13-cc8e-42b0-92db-e487e61edcb9" (UID: "380a5f13-cc8e-42b0-92db-e487e61edcb9"). InnerVolumeSpecName "kube-api-access-lvr55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:28 crc kubenswrapper[4770]: I0126 19:02:28.972900 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "380a5f13-cc8e-42b0-92db-e487e61edcb9" (UID: "380a5f13-cc8e-42b0-92db-e487e61edcb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.029407 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.029441 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvr55\" (UniqueName: \"kubernetes.io/projected/380a5f13-cc8e-42b0-92db-e487e61edcb9-kube-api-access-lvr55\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.029458 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380a5f13-cc8e-42b0-92db-e487e61edcb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.293443 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tx8s8" 
event={"ID":"380a5f13-cc8e-42b0-92db-e487e61edcb9","Type":"ContainerDied","Data":"096a7f7708fe2b4d37089dc3a5c6187e8083101164857360cf4799fe3b54f3f9"} Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.293784 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="096a7f7708fe2b4d37089dc3a5c6187e8083101164857360cf4799fe3b54f3f9" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.293527 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tx8s8" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.301969 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dfccf5f44-hghd8" event={"ID":"d119257f-62e4-4f5b-8c56-3bd82b5b6041","Type":"ContainerStarted","Data":"9b45a11ac31fad9209b91def4123573c56924ed26e22f357500eae07e80329f4"} Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.302032 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dfccf5f44-hghd8" event={"ID":"d119257f-62e4-4f5b-8c56-3bd82b5b6041","Type":"ContainerStarted","Data":"5a9179946699066d2c0d765fd4866e6daea6530dd016c1f1e8d2f8aa5fef089e"} Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.302797 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.332251 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-dfccf5f44-hghd8" podStartSLOduration=2.332231975 podStartE2EDuration="2.332231975s" podCreationTimestamp="2026-01-26 19:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:29.326184819 +0000 UTC m=+1233.891091551" watchObservedRunningTime="2026-01-26 19:02:29.332231975 +0000 UTC m=+1233.897138697" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.444507 4770 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/placement-5dfdbdd84d-x7fsz"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.568232 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-647c797856-n9jkj"] Jan 26 19:02:29 crc kubenswrapper[4770]: E0126 19:02:29.568879 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380a5f13-cc8e-42b0-92db-e487e61edcb9" containerName="barbican-db-sync" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.568897 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="380a5f13-cc8e-42b0-92db-e487e61edcb9" containerName="barbican-db-sync" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.569064 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="380a5f13-cc8e-42b0-92db-e487e61edcb9" containerName="barbican-db-sync" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.570056 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.579081 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.579168 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.579738 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bj6dd" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.595846 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-647c797856-n9jkj"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.655452 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6f49cc977f-jfnpn"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.656584 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-config-data-custom\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.656635 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-config-data\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.656669 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crg2r\" (UniqueName: \"kubernetes.io/projected/e3807ac3-64e8-4132-8b60-59d034d69c52-kube-api-access-crg2r\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.656749 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-combined-ca-bundle\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.656808 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3807ac3-64e8-4132-8b60-59d034d69c52-logs\") pod 
\"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.657781 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.663162 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.758828 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f49cc977f-jfnpn"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.759826 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-config-data-custom\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.759857 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-config-data\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.759894 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-config-data\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.759924 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crg2r\" (UniqueName: \"kubernetes.io/projected/e3807ac3-64e8-4132-8b60-59d034d69c52-kube-api-access-crg2r\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.759967 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-config-data-custom\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.759988 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-logs\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.760012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-combined-ca-bundle\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.760041 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-combined-ca-bundle\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " 
pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.760066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3807ac3-64e8-4132-8b60-59d034d69c52-logs\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.760095 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp4q\" (UniqueName: \"kubernetes.io/projected/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-kube-api-access-dhp4q\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.768920 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3807ac3-64e8-4132-8b60-59d034d69c52-logs\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.771188 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-config-data-custom\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.771502 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-config-data\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: 
\"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.773563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3807ac3-64e8-4132-8b60-59d034d69c52-combined-ca-bundle\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.786185 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578c4bbfdc-rppnp"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.787544 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.797690 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578c4bbfdc-rppnp"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.798399 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crg2r\" (UniqueName: \"kubernetes.io/projected/e3807ac3-64e8-4132-8b60-59d034d69c52-kube-api-access-crg2r\") pod \"barbican-keystone-listener-647c797856-n9jkj\" (UID: \"e3807ac3-64e8-4132-8b60-59d034d69c52\") " pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.861496 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blr6\" (UniqueName: \"kubernetes.io/projected/15ebf879-39fd-4f97-8d59-053c1a600e85-kube-api-access-2blr6\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.861578 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-sb\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.861619 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-svc\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.861640 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-swift-storage-0\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.861660 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-config-data\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.861703 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-config\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.861811 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-nb\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.865267 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-config-data-custom\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.865332 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-logs\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.865429 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-combined-ca-bundle\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.865484 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhp4q\" (UniqueName: \"kubernetes.io/projected/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-kube-api-access-dhp4q\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.866667 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-logs\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.873374 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-config-data\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.874027 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-combined-ca-bundle\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.877920 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b6b9fb758-6nb49"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.882000 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.891896 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-config-data-custom\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.893096 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.897284 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhp4q\" (UniqueName: \"kubernetes.io/projected/5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd-kube-api-access-dhp4q\") pod \"barbican-worker-6f49cc977f-jfnpn\" (UID: \"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd\") " pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.911575 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b6b9fb758-6nb49"] Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.964868 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-647c797856-n9jkj" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967026 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cnm\" (UniqueName: \"kubernetes.io/projected/fe2ce7f1-97a3-42c4-a619-19ee33fee046-kube-api-access-f5cnm\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967103 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blr6\" (UniqueName: \"kubernetes.io/projected/15ebf879-39fd-4f97-8d59-053c1a600e85-kube-api-access-2blr6\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967123 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data-custom\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-sb\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967180 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-svc\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" 
(UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967205 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-swift-storage-0\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967242 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2ce7f1-97a3-42c4-a619-19ee33fee046-logs\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967257 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-config\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967283 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-combined-ca-bundle\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967313 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: 
\"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.967330 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-nb\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.968216 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-nb\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.969040 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-sb\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.969520 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-svc\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.970087 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-swift-storage-0\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" 
Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.970589 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-config\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:29 crc kubenswrapper[4770]: I0126 19:02:29.989913 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blr6\" (UniqueName: \"kubernetes.io/projected/15ebf879-39fd-4f97-8d59-053c1a600e85-kube-api-access-2blr6\") pod \"dnsmasq-dns-578c4bbfdc-rppnp\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.037497 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f49cc977f-jfnpn" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.068606 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2ce7f1-97a3-42c4-a619-19ee33fee046-logs\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.068663 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-combined-ca-bundle\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.068698 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data\") pod 
\"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.068791 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5cnm\" (UniqueName: \"kubernetes.io/projected/fe2ce7f1-97a3-42c4-a619-19ee33fee046-kube-api-access-f5cnm\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.068854 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data-custom\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.073983 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2ce7f1-97a3-42c4-a619-19ee33fee046-logs\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.079210 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.080184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-combined-ca-bundle\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: 
\"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.081003 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data-custom\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.115821 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5cnm\" (UniqueName: \"kubernetes.io/projected/fe2ce7f1-97a3-42c4-a619-19ee33fee046-kube-api-access-f5cnm\") pod \"barbican-api-6b6b9fb758-6nb49\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.140217 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.223439 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.362066 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfdbdd84d-x7fsz" event={"ID":"a884b73b-0f60-4327-a836-b9c20f70b6e6","Type":"ContainerStarted","Data":"f1202099d17721a40471e3340c5183ad7c3fe4e29fb76e2b07bebd583a5fefa6"} Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.558710 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-647c797856-n9jkj"] Jan 26 19:02:30 crc kubenswrapper[4770]: W0126 19:02:30.574564 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3807ac3_64e8_4132_8b60_59d034d69c52.slice/crio-3fa16a9f40c3dcfc2f0ac3963008ff3afe3605b95c563f90c707ebde22a473b2 WatchSource:0}: Error finding container 3fa16a9f40c3dcfc2f0ac3963008ff3afe3605b95c563f90c707ebde22a473b2: Status 404 returned error can't find the container with id 3fa16a9f40c3dcfc2f0ac3963008ff3afe3605b95c563f90c707ebde22a473b2 Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.651757 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f49cc977f-jfnpn"] Jan 26 19:02:30 crc kubenswrapper[4770]: W0126 19:02:30.664148 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aef95c5_2dc6_49e0_b2fa_b33b501c9bdd.slice/crio-ec11530162e0474646d95b5c185b9ee23172109520221f5abd4fc200ce1a270d WatchSource:0}: Error finding container ec11530162e0474646d95b5c185b9ee23172109520221f5abd4fc200ce1a270d: Status 404 returned error can't find the container with id ec11530162e0474646d95b5c185b9ee23172109520221f5abd4fc200ce1a270d Jan 26 19:02:30 crc kubenswrapper[4770]: I0126 19:02:30.941156 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b6b9fb758-6nb49"] Jan 26 19:02:31 crc 
kubenswrapper[4770]: I0126 19:02:31.099011 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578c4bbfdc-rppnp"] Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.406437 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" event={"ID":"15ebf879-39fd-4f97-8d59-053c1a600e85","Type":"ContainerStarted","Data":"bd7b0b226278b140d2bfa980919deeaf59f4984a14721baafd22d2be3a5c1f76"} Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.416446 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfdbdd84d-x7fsz" event={"ID":"a884b73b-0f60-4327-a836-b9c20f70b6e6","Type":"ContainerStarted","Data":"8824bb6e23f7d13e40adce54aa9ac999d4fa56819dc8ce404eb3cee89bac0a16"} Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.416770 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfdbdd84d-x7fsz" event={"ID":"a884b73b-0f60-4327-a836-b9c20f70b6e6","Type":"ContainerStarted","Data":"bf6a045f6beec7cdbde49c28d14ab790978e78663ca296c0cc8fd29899c666da"} Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.416825 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.416850 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.420135 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f49cc977f-jfnpn" event={"ID":"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd","Type":"ContainerStarted","Data":"ec11530162e0474646d95b5c185b9ee23172109520221f5abd4fc200ce1a270d"} Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.430243 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647c797856-n9jkj" 
event={"ID":"e3807ac3-64e8-4132-8b60-59d034d69c52","Type":"ContainerStarted","Data":"3fa16a9f40c3dcfc2f0ac3963008ff3afe3605b95c563f90c707ebde22a473b2"} Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.432134 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b6b9fb758-6nb49" event={"ID":"fe2ce7f1-97a3-42c4-a619-19ee33fee046","Type":"ContainerStarted","Data":"329f25b12499781371ee2d0d7dca387a4e0cbeb752591c4d378eb72de4384871"} Jan 26 19:02:31 crc kubenswrapper[4770]: I0126 19:02:31.448484 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5dfdbdd84d-x7fsz" podStartSLOduration=3.448464947 podStartE2EDuration="3.448464947s" podCreationTimestamp="2026-01-26 19:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:31.439789739 +0000 UTC m=+1236.004696491" watchObservedRunningTime="2026-01-26 19:02:31.448464947 +0000 UTC m=+1236.013371679" Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.409394 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.459839 4770 generic.go:334] "Generic (PLEG): container finished" podID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerID="2ef3004668b2d9e8f4d28ffb56fd009badc36ce84dbf78db48710584335f8cfd" exitCode=0 Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.459956 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" event={"ID":"15ebf879-39fd-4f97-8d59-053c1a600e85","Type":"ContainerDied","Data":"2ef3004668b2d9e8f4d28ffb56fd009badc36ce84dbf78db48710584335f8cfd"} Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.467808 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b6b9fb758-6nb49" 
event={"ID":"fe2ce7f1-97a3-42c4-a619-19ee33fee046","Type":"ContainerStarted","Data":"57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a"} Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.467861 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b6b9fb758-6nb49" event={"ID":"fe2ce7f1-97a3-42c4-a619-19ee33fee046","Type":"ContainerStarted","Data":"7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db"} Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.511844 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b6b9fb758-6nb49" podStartSLOduration=3.511828585 podStartE2EDuration="3.511828585s" podCreationTimestamp="2026-01-26 19:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:32.507453335 +0000 UTC m=+1237.072360078" watchObservedRunningTime="2026-01-26 19:02:32.511828585 +0000 UTC m=+1237.076735317" Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.822008 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.822346 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:32 crc kubenswrapper[4770]: E0126 19:02:32.823196 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4 is running failed: container process not found" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 26 19:02:32 crc kubenswrapper[4770]: E0126 19:02:32.825019 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4 is running failed: container process not found" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 26 19:02:32 crc kubenswrapper[4770]: E0126 19:02:32.826042 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4 is running failed: container process not found" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 26 19:02:32 crc kubenswrapper[4770]: E0126 19:02:32.826194 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine" Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.917161 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-545575dfd-bbtbf"] Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.928925 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-545575dfd-bbtbf"] Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.929025 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.938267 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 19:02:32 crc kubenswrapper[4770]: I0126 19:02:32.938743 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.071959 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ff829b-eabe-4d50-a22f-4da3d6cf798f-logs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.072035 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-public-tls-certs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.072181 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-combined-ca-bundle\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.072253 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-internal-tls-certs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " 
pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.072341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-config-data\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.072398 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-config-data-custom\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.072519 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ql8\" (UniqueName: \"kubernetes.io/projected/46ff829b-eabe-4d50-a22f-4da3d6cf798f-kube-api-access-82ql8\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.174838 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-config-data\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.174918 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-config-data-custom\") pod \"barbican-api-545575dfd-bbtbf\" (UID: 
\"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.174961 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ql8\" (UniqueName: \"kubernetes.io/projected/46ff829b-eabe-4d50-a22f-4da3d6cf798f-kube-api-access-82ql8\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.175049 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ff829b-eabe-4d50-a22f-4da3d6cf798f-logs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.175086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-public-tls-certs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.175131 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-combined-ca-bundle\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.175189 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-internal-tls-certs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " 
pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.180912 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-config-data\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.189269 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-internal-tls-certs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.189545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ff829b-eabe-4d50-a22f-4da3d6cf798f-logs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.190567 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-public-tls-certs\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.193627 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-combined-ca-bundle\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.193939 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46ff829b-eabe-4d50-a22f-4da3d6cf798f-config-data-custom\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.202491 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ql8\" (UniqueName: \"kubernetes.io/projected/46ff829b-eabe-4d50-a22f-4da3d6cf798f-kube-api-access-82ql8\") pod \"barbican-api-545575dfd-bbtbf\" (UID: \"46ff829b-eabe-4d50-a22f-4da3d6cf798f\") " pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.270108 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.480479 4770 generic.go:334] "Generic (PLEG): container finished" podID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4" exitCode=1 Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.481764 4770 scope.go:117] "RemoveContainer" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4" Jan 26 19:02:33 crc kubenswrapper[4770]: E0126 19:02:33.482022 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ba7a2e1d-7c6b-4d89-ac01-5a93fb071444)\"" pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.482274 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerDied","Data":"8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4"} Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.482314 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.482342 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:33 crc kubenswrapper[4770]: I0126 19:02:33.482613 4770 scope.go:117] "RemoveContainer" containerID="c89767cdff396cb5d71c63a8907cd883a50d04d95d098934b7512579ad6d1885" Jan 26 19:02:34 crc kubenswrapper[4770]: I0126 19:02:34.495881 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-545575dfd-bbtbf"] Jan 26 19:02:34 crc kubenswrapper[4770]: I0126 19:02:34.518705 4770 scope.go:117] "RemoveContainer" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4" Jan 26 19:02:34 crc kubenswrapper[4770]: I0126 19:02:34.519923 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647c797856-n9jkj" event={"ID":"e3807ac3-64e8-4132-8b60-59d034d69c52","Type":"ContainerStarted","Data":"e4daa867092106823a136481d04170adc909d2d7321231f3d3e35a3b7597b5ac"} Jan 26 19:02:34 crc kubenswrapper[4770]: E0126 19:02:34.521053 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ba7a2e1d-7c6b-4d89-ac01-5a93fb071444)\"" pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" Jan 26 19:02:34 crc kubenswrapper[4770]: I0126 19:02:34.533563 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" 
event={"ID":"15ebf879-39fd-4f97-8d59-053c1a600e85","Type":"ContainerStarted","Data":"6e8b555a7423b88b024771946627a03b8c05fae48df5eeddb563d408fa362ac4"} Jan 26 19:02:34 crc kubenswrapper[4770]: I0126 19:02:34.533752 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:34 crc kubenswrapper[4770]: I0126 19:02:34.543317 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f49cc977f-jfnpn" event={"ID":"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd","Type":"ContainerStarted","Data":"e736161e94cf27e4e5ff2584bd2259cdbd227a24aa25c66f3cf28103fb7c0fc5"} Jan 26 19:02:34 crc kubenswrapper[4770]: I0126 19:02:34.582311 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" podStartSLOduration=5.582286781 podStartE2EDuration="5.582286781s" podCreationTimestamp="2026-01-26 19:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:34.568278377 +0000 UTC m=+1239.133185129" watchObservedRunningTime="2026-01-26 19:02:34.582286781 +0000 UTC m=+1239.147193513" Jan 26 19:02:35 crc kubenswrapper[4770]: I0126 19:02:35.075608 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:02:35 crc kubenswrapper[4770]: I0126 19:02:35.558056 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-545575dfd-bbtbf" event={"ID":"46ff829b-eabe-4d50-a22f-4da3d6cf798f","Type":"ContainerStarted","Data":"95d5ff9f64edfe3625e56ab7b52afcea5f706fc15082651c29213ad84380f8be"} Jan 26 19:02:35 crc kubenswrapper[4770]: I0126 19:02:35.558412 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-545575dfd-bbtbf" 
event={"ID":"46ff829b-eabe-4d50-a22f-4da3d6cf798f","Type":"ContainerStarted","Data":"9707fc326fba212d966544e1b35d3d4e7d6867f6684f42a1fe08ed5f13a27633"} Jan 26 19:02:35 crc kubenswrapper[4770]: I0126 19:02:35.560631 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f49cc977f-jfnpn" event={"ID":"5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd","Type":"ContainerStarted","Data":"7748a772b6d4a257c2200724dadf082bb20ceaa65073056453ee7fab02acd693"} Jan 26 19:02:35 crc kubenswrapper[4770]: I0126 19:02:35.565321 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647c797856-n9jkj" event={"ID":"e3807ac3-64e8-4132-8b60-59d034d69c52","Type":"ContainerStarted","Data":"bf983d021ea69d0e6b63e54a975f637b3cd4c7b1bdb79559a0dd2ba88d30868a"} Jan 26 19:02:35 crc kubenswrapper[4770]: I0126 19:02:35.585944 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6f49cc977f-jfnpn" podStartSLOduration=3.247539559 podStartE2EDuration="6.585925582s" podCreationTimestamp="2026-01-26 19:02:29 +0000 UTC" firstStartedPulling="2026-01-26 19:02:30.667980474 +0000 UTC m=+1235.232887206" lastFinishedPulling="2026-01-26 19:02:34.006366497 +0000 UTC m=+1238.571273229" observedRunningTime="2026-01-26 19:02:35.574800366 +0000 UTC m=+1240.139707098" watchObservedRunningTime="2026-01-26 19:02:35.585925582 +0000 UTC m=+1240.150832314" Jan 26 19:02:35 crc kubenswrapper[4770]: I0126 19:02:35.606140 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-647c797856-n9jkj" podStartSLOduration=3.194429121 podStartE2EDuration="6.606120726s" podCreationTimestamp="2026-01-26 19:02:29 +0000 UTC" firstStartedPulling="2026-01-26 19:02:30.593314454 +0000 UTC m=+1235.158221186" lastFinishedPulling="2026-01-26 19:02:34.005006059 +0000 UTC m=+1238.569912791" observedRunningTime="2026-01-26 19:02:35.599017601 +0000 UTC m=+1240.163924343" 
watchObservedRunningTime="2026-01-26 19:02:35.606120726 +0000 UTC m=+1240.171027458" Jan 26 19:02:36 crc kubenswrapper[4770]: I0126 19:02:36.576921 4770 generic.go:334] "Generic (PLEG): container finished" podID="200a66de-48c2-4fad-babc-4e45e99790cd" containerID="1ccec2d55f09f36fd639413394264d95c642e5899e056eb76b6565b818f4a0f3" exitCode=0 Jan 26 19:02:36 crc kubenswrapper[4770]: I0126 19:02:36.577002 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f98bs" event={"ID":"200a66de-48c2-4fad-babc-4e45e99790cd","Type":"ContainerDied","Data":"1ccec2d55f09f36fd639413394264d95c642e5899e056eb76b6565b818f4a0f3"} Jan 26 19:02:38 crc kubenswrapper[4770]: I0126 19:02:38.597831 4770 generic.go:334] "Generic (PLEG): container finished" podID="9d149076-49cc-4a5a-80f8-c34dac1c2b45" containerID="725d45f04a1a6f23a0aa0a8e35fac44c1410e163f48c70408f194d0a3641477a" exitCode=0 Jan 26 19:02:38 crc kubenswrapper[4770]: I0126 19:02:38.597923 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q2sdv" event={"ID":"9d149076-49cc-4a5a-80f8-c34dac1c2b45","Type":"ContainerDied","Data":"725d45f04a1a6f23a0aa0a8e35fac44c1410e163f48c70408f194d0a3641477a"} Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.302195 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-f98bs" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.429415 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b5pz\" (UniqueName: \"kubernetes.io/projected/200a66de-48c2-4fad-babc-4e45e99790cd-kube-api-access-8b5pz\") pod \"200a66de-48c2-4fad-babc-4e45e99790cd\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.429478 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-config-data\") pod \"200a66de-48c2-4fad-babc-4e45e99790cd\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.429530 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-scripts\") pod \"200a66de-48c2-4fad-babc-4e45e99790cd\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.429564 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/200a66de-48c2-4fad-babc-4e45e99790cd-etc-machine-id\") pod \"200a66de-48c2-4fad-babc-4e45e99790cd\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.429626 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-combined-ca-bundle\") pod \"200a66de-48c2-4fad-babc-4e45e99790cd\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.429787 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-db-sync-config-data\") pod \"200a66de-48c2-4fad-babc-4e45e99790cd\" (UID: \"200a66de-48c2-4fad-babc-4e45e99790cd\") " Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.430688 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200a66de-48c2-4fad-babc-4e45e99790cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "200a66de-48c2-4fad-babc-4e45e99790cd" (UID: "200a66de-48c2-4fad-babc-4e45e99790cd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.436910 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200a66de-48c2-4fad-babc-4e45e99790cd-kube-api-access-8b5pz" (OuterVolumeSpecName: "kube-api-access-8b5pz") pod "200a66de-48c2-4fad-babc-4e45e99790cd" (UID: "200a66de-48c2-4fad-babc-4e45e99790cd"). InnerVolumeSpecName "kube-api-access-8b5pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.438493 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-scripts" (OuterVolumeSpecName: "scripts") pod "200a66de-48c2-4fad-babc-4e45e99790cd" (UID: "200a66de-48c2-4fad-babc-4e45e99790cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.452938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "200a66de-48c2-4fad-babc-4e45e99790cd" (UID: "200a66de-48c2-4fad-babc-4e45e99790cd"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.523101 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-config-data" (OuterVolumeSpecName: "config-data") pod "200a66de-48c2-4fad-babc-4e45e99790cd" (UID: "200a66de-48c2-4fad-babc-4e45e99790cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.532963 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b5pz\" (UniqueName: \"kubernetes.io/projected/200a66de-48c2-4fad-babc-4e45e99790cd-kube-api-access-8b5pz\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.533084 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.533101 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.533169 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/200a66de-48c2-4fad-babc-4e45e99790cd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.533203 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.534335 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "200a66de-48c2-4fad-babc-4e45e99790cd" (UID: "200a66de-48c2-4fad-babc-4e45e99790cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.613351 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f98bs" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.613344 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f98bs" event={"ID":"200a66de-48c2-4fad-babc-4e45e99790cd","Type":"ContainerDied","Data":"9a63160420d572afa0a650c5ef481bca307c3d285819006739ef5d8391fc3e94"} Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.613535 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a63160420d572afa0a650c5ef481bca307c3d285819006739ef5d8391fc3e94" Jan 26 19:02:39 crc kubenswrapper[4770]: I0126 19:02:39.634723 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200a66de-48c2-4fad-babc-4e45e99790cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.141850 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.201273 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878674dd9-pkgz7"] Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.201505 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerName="dnsmasq-dns" containerID="cri-o://a8892a7c83369c1efbbecd25002cc5eec8186c20a1bf4fb34296877cad6d6feb" gracePeriod=10 Jan 26 
19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.636439 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:40 crc kubenswrapper[4770]: E0126 19:02:40.637198 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200a66de-48c2-4fad-babc-4e45e99790cd" containerName="cinder-db-sync" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.637217 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="200a66de-48c2-4fad-babc-4e45e99790cd" containerName="cinder-db-sync" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.659592 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="200a66de-48c2-4fad-babc-4e45e99790cd" containerName="cinder-db-sync" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.660725 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.674638 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.674663 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.674917 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qcpct" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.675127 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.679631 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.900642 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.900682 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.900800 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bd0d21b-c128-4993-9b91-d41dea49e2b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.900848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.900963 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.900989 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4lh\" (UniqueName: 
\"kubernetes.io/projected/2bd0d21b-c128-4993-9b91-d41dea49e2b6-kube-api-access-dk4lh\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.919438 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bf66fc99-25gns"] Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.921126 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:40 crc kubenswrapper[4770]: I0126 19:02:40.937732 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf66fc99-25gns"] Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.002407 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh54v\" (UniqueName: \"kubernetes.io/projected/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-kube-api-access-gh54v\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.002486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.002561 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.002623 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-sb\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.002673 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bd0d21b-c128-4993-9b91-d41dea49e2b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.002764 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.003740 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bd0d21b-c128-4993-9b91-d41dea49e2b6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.003784 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-swift-storage-0\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.004131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-nb\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.004225 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-config\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.004261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.004306 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-svc\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.004348 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4lh\" (UniqueName: \"kubernetes.io/projected/2bd0d21b-c128-4993-9b91-d41dea49e2b6-kube-api-access-dk4lh\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.025848 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.027440 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.032263 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-scripts\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.047920 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4lh\" (UniqueName: \"kubernetes.io/projected/2bd0d21b-c128-4993-9b91-d41dea49e2b6-kube-api-access-dk4lh\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.077097 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.105961 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-swift-storage-0\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: 
I0126 19:02:41.106012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-nb\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.106046 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-config\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.106068 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-svc\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.106115 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh54v\" (UniqueName: \"kubernetes.io/projected/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-kube-api-access-gh54v\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.106167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-sb\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.107045 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-sb\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.107227 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-swift-storage-0\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.107650 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.109429 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.112628 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.114421 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-config\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.114569 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-nb\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.114962 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-svc\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.129316 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.160631 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh54v\" (UniqueName: \"kubernetes.io/projected/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-kube-api-access-gh54v\") pod \"dnsmasq-dns-bf66fc99-25gns\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.213558 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb633454-1a38-4280-a3d4-8825f169e03e-logs\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.213609 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st7sg\" (UniqueName: \"kubernetes.io/projected/eb633454-1a38-4280-a3d4-8825f169e03e-kube-api-access-st7sg\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.213653 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.213692 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.213854 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb633454-1a38-4280-a3d4-8825f169e03e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.213962 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.213989 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-scripts\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.243925 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.308781 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.315347 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb633454-1a38-4280-a3d4-8825f169e03e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.315463 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.315497 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-scripts\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.315550 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb633454-1a38-4280-a3d4-8825f169e03e-logs\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.315577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st7sg\" (UniqueName: \"kubernetes.io/projected/eb633454-1a38-4280-a3d4-8825f169e03e-kube-api-access-st7sg\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.315608 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.315649 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.316494 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb633454-1a38-4280-a3d4-8825f169e03e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.317228 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb633454-1a38-4280-a3d4-8825f169e03e-logs\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.323801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.325236 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.329806 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.336772 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-scripts\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.339988 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st7sg\" (UniqueName: \"kubernetes.io/projected/eb633454-1a38-4280-a3d4-8825f169e03e-kube-api-access-st7sg\") pod \"cinder-api-0\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.532806 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.631380 4770 generic.go:334] "Generic (PLEG): container finished" podID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerID="a8892a7c83369c1efbbecd25002cc5eec8186c20a1bf4fb34296877cad6d6feb" exitCode=0 Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.631429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" event={"ID":"b5223e91-68cc-4d7a-91ca-c58e530ef973","Type":"ContainerDied","Data":"a8892a7c83369c1efbbecd25002cc5eec8186c20a1bf4fb34296877cad6d6feb"} Jan 26 19:02:41 crc kubenswrapper[4770]: I0126 19:02:41.828620 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.347060 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-q2sdv" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.399048 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.448908 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-combined-ca-bundle\") pod \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.449197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-config-data\") pod \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.449239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-4v8s4\" (UniqueName: \"kubernetes.io/projected/9d149076-49cc-4a5a-80f8-c34dac1c2b45-kube-api-access-4v8s4\") pod \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.449379 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-db-sync-config-data\") pod \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\" (UID: \"9d149076-49cc-4a5a-80f8-c34dac1c2b45\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.463035 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d149076-49cc-4a5a-80f8-c34dac1c2b45-kube-api-access-4v8s4" (OuterVolumeSpecName: "kube-api-access-4v8s4") pod "9d149076-49cc-4a5a-80f8-c34dac1c2b45" (UID: "9d149076-49cc-4a5a-80f8-c34dac1c2b45"). InnerVolumeSpecName "kube-api-access-4v8s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.472792 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9d149076-49cc-4a5a-80f8-c34dac1c2b45" (UID: "9d149076-49cc-4a5a-80f8-c34dac1c2b45"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.518991 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.535842 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d149076-49cc-4a5a-80f8-c34dac1c2b45" (UID: "9d149076-49cc-4a5a-80f8-c34dac1c2b45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.556369 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.556397 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.556409 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v8s4\" (UniqueName: \"kubernetes.io/projected/9d149076-49cc-4a5a-80f8-c34dac1c2b45-kube-api-access-4v8s4\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.596446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-config-data" (OuterVolumeSpecName: "config-data") pod "9d149076-49cc-4a5a-80f8-c34dac1c2b45" (UID: "9d149076-49cc-4a5a-80f8-c34dac1c2b45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.646033 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.657568 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-config\") pod \"b5223e91-68cc-4d7a-91ca-c58e530ef973\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.657659 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb9wh\" (UniqueName: \"kubernetes.io/projected/b5223e91-68cc-4d7a-91ca-c58e530ef973-kube-api-access-kb9wh\") pod \"b5223e91-68cc-4d7a-91ca-c58e530ef973\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.657775 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-swift-storage-0\") pod \"b5223e91-68cc-4d7a-91ca-c58e530ef973\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.657791 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-svc\") pod \"b5223e91-68cc-4d7a-91ca-c58e530ef973\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.657826 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-nb\") pod \"b5223e91-68cc-4d7a-91ca-c58e530ef973\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " 
Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.657861 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-sb\") pod \"b5223e91-68cc-4d7a-91ca-c58e530ef973\" (UID: \"b5223e91-68cc-4d7a-91ca-c58e530ef973\") " Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.663286 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-q2sdv" event={"ID":"9d149076-49cc-4a5a-80f8-c34dac1c2b45","Type":"ContainerDied","Data":"d5bcbb8828c800f7e73aa6eeec67b631c46bd6b0b0f0d325f72b092baf28d9e1"} Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.663335 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5bcbb8828c800f7e73aa6eeec67b631c46bd6b0b0f0d325f72b092baf28d9e1" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.663406 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d149076-49cc-4a5a-80f8-c34dac1c2b45-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.663629 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-q2sdv" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.678980 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5223e91-68cc-4d7a-91ca-c58e530ef973-kube-api-access-kb9wh" (OuterVolumeSpecName: "kube-api-access-kb9wh") pod "b5223e91-68cc-4d7a-91ca-c58e530ef973" (UID: "b5223e91-68cc-4d7a-91ca-c58e530ef973"). InnerVolumeSpecName "kube-api-access-kb9wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.683261 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" event={"ID":"b5223e91-68cc-4d7a-91ca-c58e530ef973","Type":"ContainerDied","Data":"2eba6ded34245e9b2c2fb43d0e4be28c21ae9daa3a71ee375f1ed7e2be6b9c7c"} Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.683342 4770 scope.go:117] "RemoveContainer" containerID="a8892a7c83369c1efbbecd25002cc5eec8186c20a1bf4fb34296877cad6d6feb" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.683508 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.734253 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.764660 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb9wh\" (UniqueName: \"kubernetes.io/projected/b5223e91-68cc-4d7a-91ca-c58e530ef973-kube-api-access-kb9wh\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.771300 4770 scope.go:117] "RemoveContainer" containerID="27781c49422cc1b57d9b88957770b80f522190aaf26026159320aaf1558791ee" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.828296 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.829303 4770 scope.go:117] "RemoveContainer" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.910786 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56d4478bc7-wx9fs"] Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.911019 4770 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/neutron-56d4478bc7-wx9fs" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-api" containerID="cri-o://bc3f922a8c90ab70df5d9e39eaf37090517994e25776d5cda6209a84eb615cc1" gracePeriod=30 Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.913927 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56d4478bc7-wx9fs" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-httpd" containerID="cri-o://56bed967675182c3b2fd83364e4a6690c7d3127df5c2fa061c93f26b4908d9ba" gracePeriod=30 Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.921966 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-56d4478bc7-wx9fs" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": EOF" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.927774 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c5fff9c7-vsc8j"] Jan 26 19:02:42 crc kubenswrapper[4770]: E0126 19:02:42.928179 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerName="dnsmasq-dns" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.928196 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerName="dnsmasq-dns" Jan 26 19:02:42 crc kubenswrapper[4770]: E0126 19:02:42.928240 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerName="init" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.928246 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerName="init" Jan 26 19:02:42 crc kubenswrapper[4770]: E0126 19:02:42.928262 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9d149076-49cc-4a5a-80f8-c34dac1c2b45" containerName="glance-db-sync" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.928268 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d149076-49cc-4a5a-80f8-c34dac1c2b45" containerName="glance-db-sync" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.928468 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d149076-49cc-4a5a-80f8-c34dac1c2b45" containerName="glance-db-sync" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.928489 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerName="dnsmasq-dns" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.938260 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:42 crc kubenswrapper[4770]: I0126 19:02:42.974568 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c5fff9c7-vsc8j"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.093256 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.138584 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-public-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.138912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xb82\" (UniqueName: \"kubernetes.io/projected/061a1ade-3e2c-4fa3-af1d-79119e42b777-kube-api-access-7xb82\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc 
kubenswrapper[4770]: I0126 19:02:43.139206 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-ovndb-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.139347 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-httpd-config\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.139456 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-config\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.139554 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-combined-ca-bundle\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.139635 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-internal-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 
19:02:43.224160 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5223e91-68cc-4d7a-91ca-c58e530ef973" (UID: "b5223e91-68cc-4d7a-91ca-c58e530ef973"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.242926 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-ovndb-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.243007 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-httpd-config\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.243042 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-config\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.243076 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-combined-ca-bundle\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.243104 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-internal-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.243135 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-public-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.243174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xb82\" (UniqueName: \"kubernetes.io/projected/061a1ade-3e2c-4fa3-af1d-79119e42b777-kube-api-access-7xb82\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.243274 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.261819 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf66fc99-25gns"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.291148 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.318483 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-ovndb-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " 
pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.323007 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-config\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.333332 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-public-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.333854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-httpd-config\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.333893 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-combined-ca-bundle\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.336432 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.352058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xb82\" (UniqueName: \"kubernetes.io/projected/061a1ade-3e2c-4fa3-af1d-79119e42b777-kube-api-access-7xb82\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: 
\"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.352154 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/061a1ade-3e2c-4fa3-af1d-79119e42b777-internal-tls-certs\") pod \"neutron-5c5fff9c7-vsc8j\" (UID: \"061a1ade-3e2c-4fa3-af1d-79119e42b777\") " pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.356294 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b5223e91-68cc-4d7a-91ca-c58e530ef973" (UID: "b5223e91-68cc-4d7a-91ca-c58e530ef973"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.360304 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-config" (OuterVolumeSpecName: "config") pod "b5223e91-68cc-4d7a-91ca-c58e530ef973" (UID: "b5223e91-68cc-4d7a-91ca-c58e530ef973"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.407912 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5223e91-68cc-4d7a-91ca-c58e530ef973" (UID: "b5223e91-68cc-4d7a-91ca-c58e530ef973"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.426310 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5223e91-68cc-4d7a-91ca-c58e530ef973" (UID: "b5223e91-68cc-4d7a-91ca-c58e530ef973"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:43 crc kubenswrapper[4770]: E0126 19:02:43.436876 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.455056 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.455111 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.455123 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.455131 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5223e91-68cc-4d7a-91ca-c58e530ef973-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.590303 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.700963 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878674dd9-pkgz7"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.733799 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878674dd9-pkgz7"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.844659 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" path="/var/lib/kubelet/pods/b5223e91-68cc-4d7a-91ca-c58e530ef973/volumes" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.845518 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf66fc99-25gns"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.851538 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84c7cd669f-f6xsz"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.868223 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.870558 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c7cd669f-f6xsz"] Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.944940 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2bd0d21b-c128-4993-9b91-d41dea49e2b6","Type":"ContainerStarted","Data":"44c0870864990a96fb665b964f5fd98709d8ffc985dcf362aa22e4a970bc5118"} Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.966698 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerStarted","Data":"428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd"} Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.966873 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="ceilometer-notification-agent" containerID="cri-o://d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc" gracePeriod=30 Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.967124 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.967369 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="proxy-httpd" containerID="cri-o://428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd" gracePeriod=30 Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.967411 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="sg-core" 
containerID="cri-o://93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe" gracePeriod=30 Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.985762 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7mz\" (UniqueName: \"kubernetes.io/projected/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-kube-api-access-8x7mz\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.985829 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-svc\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.985930 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-sb\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.986002 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-nb\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.986060 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-swift-storage-0\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:43 crc kubenswrapper[4770]: I0126 19:02:43.986117 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-config\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:43.998082 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-545575dfd-bbtbf" event={"ID":"46ff829b-eabe-4d50-a22f-4da3d6cf798f","Type":"ContainerStarted","Data":"a409e8378ad2b6505a9559c3d04a7d764a4c5fce5fe4a209ff7d2801f95890ed"} Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:43.999154 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:43.999500 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.033052 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb633454-1a38-4280-a3d4-8825f169e03e","Type":"ContainerStarted","Data":"f365287d8148de3246385cbdddcbd3870394c468c61d914563807b2309002881"} Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.035375 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf66fc99-25gns" event={"ID":"d08781bc-7c50-4b23-87aa-03f6ef0e6b41","Type":"ContainerStarted","Data":"b60935eb795b20537ee0ba47c162b7e2f8cd34b53904189b1a7a36f1bf07a5a9"} Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.039613 4770 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-api-545575dfd-bbtbf" podStartSLOduration=12.039593067 podStartE2EDuration="12.039593067s" podCreationTimestamp="2026-01-26 19:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:44.036207234 +0000 UTC m=+1248.601113976" watchObservedRunningTime="2026-01-26 19:02:44.039593067 +0000 UTC m=+1248.604499799" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.088969 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7mz\" (UniqueName: \"kubernetes.io/projected/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-kube-api-access-8x7mz\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.089410 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-svc\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.089577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-sb\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.089682 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-nb\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " 
pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.089804 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-swift-storage-0\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.089907 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-config\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.101745 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-config\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.102848 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-swift-storage-0\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.103407 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-svc\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 
19:02:44.103838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-sb\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.104175 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-nb\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.156511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7mz\" (UniqueName: \"kubernetes.io/projected/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-kube-api-access-8x7mz\") pod \"dnsmasq-dns-84c7cd669f-f6xsz\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.215146 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.848526 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.850520 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.853556 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.853664 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.853946 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dhz8c" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.872908 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.966922 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.968712 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.971238 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 19:02:44 crc kubenswrapper[4770]: I0126 19:02:44.978686 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-56d4478bc7-wx9fs" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": dial tcp 10.217.0.166:9696: connect: connection refused" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.010628 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c5fff9c7-vsc8j"] Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.026831 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.026955 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.027006 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-logs\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc 
kubenswrapper[4770]: I0126 19:02:45.027041 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-scripts\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.027094 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.027144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw7tn\" (UniqueName: \"kubernetes.io/projected/a275885a-bc39-45eb-8375-9ee5b5059744-kube-api-access-dw7tn\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.027186 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-config-data\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.037338 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130245 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130301 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130338 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-logs\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130373 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-scripts\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130446 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130477 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw7tn\" (UniqueName: \"kubernetes.io/projected/a275885a-bc39-45eb-8375-9ee5b5059744-kube-api-access-dw7tn\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130512 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-config-data\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-logs\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130557 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130598 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130617 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130671 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.130722 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8tx7\" (UniqueName: \"kubernetes.io/projected/370028b3-c0af-463f-8825-b6d50f82849a-kube-api-access-s8tx7\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.131213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.132568 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") device 
mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.152825 4770 generic.go:334] "Generic (PLEG): container finished" podID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerID="428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd" exitCode=0 Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.152871 4770 generic.go:334] "Generic (PLEG): container finished" podID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerID="93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe" exitCode=2 Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.152962 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerDied","Data":"428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd"} Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.152997 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerDied","Data":"93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe"} Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.158726 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-scripts\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.159250 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-logs\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.175807 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw7tn\" (UniqueName: \"kubernetes.io/projected/a275885a-bc39-45eb-8375-9ee5b5059744-kube-api-access-dw7tn\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.176988 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-config-data\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.188499 4770 generic.go:334] "Generic (PLEG): container finished" podID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerID="56bed967675182c3b2fd83364e4a6690c7d3127df5c2fa061c93f26b4908d9ba" exitCode=0 Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.188569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d4478bc7-wx9fs" event={"ID":"e85972ec-8d1c-4d0a-9696-a8c2bae4607f","Type":"ContainerDied","Data":"56bed967675182c3b2fd83364e4a6690c7d3127df5c2fa061c93f26b4908d9ba"} Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.190184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.201074 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c7cd669f-f6xsz"] Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.201259 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5fff9c7-vsc8j" 
event={"ID":"061a1ade-3e2c-4fa3-af1d-79119e42b777","Type":"ContainerStarted","Data":"2a6b9f590db5a36099c6f6846466951d8108a901e5d2ae8ce39301e93ad6b029"} Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.212424 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerStarted","Data":"b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa"} Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.233778 4770 generic.go:334] "Generic (PLEG): container finished" podID="d08781bc-7c50-4b23-87aa-03f6ef0e6b41" containerID="fb5df5fe8c56a40ce42b6fff7a1ddd31732ef697a9949d96908c818b99184329" exitCode=0 Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.234403 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf66fc99-25gns" event={"ID":"d08781bc-7c50-4b23-87aa-03f6ef0e6b41","Type":"ContainerDied","Data":"fb5df5fe8c56a40ce42b6fff7a1ddd31732ef697a9949d96908c818b99184329"} Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.235178 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.235265 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.235310 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-logs\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.235328 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.235359 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.235373 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.235453 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8tx7\" (UniqueName: \"kubernetes.io/projected/370028b3-c0af-463f-8825-b6d50f82849a-kube-api-access-s8tx7\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.236071 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.240134 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.240235 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.241427 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-logs\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.244357 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.247786 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.253478 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.322442 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8tx7\" (UniqueName: \"kubernetes.io/projected/370028b3-c0af-463f-8825-b6d50f82849a-kube-api-access-s8tx7\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.437919 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.494247 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:02:45 crc kubenswrapper[4770]: I0126 19:02:45.620490 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.116327 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77b47dc986-cqqn6" Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.235343 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f47668778-9m4hm"] Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.247606 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f47668778-9m4hm" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon-log" containerID="cri-o://424e384591d9962673acb328847231755ae004f2ec839d227ef88b67b1f4fa9e" gracePeriod=30 Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.247759 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f47668778-9m4hm" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon" containerID="cri-o://00d8410891c3266be02e94a7492de06621996331830bc8b8d3cfe1d17da1f3fb" gracePeriod=30 Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.286918 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb633454-1a38-4280-a3d4-8825f169e03e","Type":"ContainerStarted","Data":"79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7"} Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.304362 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf66fc99-25gns" event={"ID":"d08781bc-7c50-4b23-87aa-03f6ef0e6b41","Type":"ContainerDied","Data":"b60935eb795b20537ee0ba47c162b7e2f8cd34b53904189b1a7a36f1bf07a5a9"} Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.304404 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b60935eb795b20537ee0ba47c162b7e2f8cd34b53904189b1a7a36f1bf07a5a9" Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 
19:02:46.332764 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2bd0d21b-c128-4993-9b91-d41dea49e2b6","Type":"ContainerStarted","Data":"67489b7c08648a6b6e4d621c1f3728a8eba595ee5425af3a116e6ada81b58764"} Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.339371 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf66fc99-25gns" Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.341509 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5fff9c7-vsc8j" event={"ID":"061a1ade-3e2c-4fa3-af1d-79119e42b777","Type":"ContainerStarted","Data":"030776f8a76bb7c7152ccece4b5fe0b4ab80620603f16bc2c6431b37d0497512"} Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.354880 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" event={"ID":"0b5878d0-6bfa-43b8-8382-7f0c503f7b24","Type":"ContainerStarted","Data":"102eb892f1022d60f9dd531b5b4bbe8fac91a0df1bdd9a8ac38dada4a4116e4b"} Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.354912 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" event={"ID":"0b5878d0-6bfa-43b8-8382-7f0c503f7b24","Type":"ContainerStarted","Data":"f6ce24682bed44dc5ed33747338b07e37aadb75a854e1507221e0e0a6b21305c"} Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.386815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-nb\") pod \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") " Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.387203 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-sb\") 
pod \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") "
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.387238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-svc\") pod \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") "
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.387257 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-config\") pod \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") "
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.387289 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh54v\" (UniqueName: \"kubernetes.io/projected/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-kube-api-access-gh54v\") pod \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") "
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.387319 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-swift-storage-0\") pod \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\" (UID: \"d08781bc-7c50-4b23-87aa-03f6ef0e6b41\") "
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.442082 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-kube-api-access-gh54v" (OuterVolumeSpecName: "kube-api-access-gh54v") pod "d08781bc-7c50-4b23-87aa-03f6ef0e6b41" (UID: "d08781bc-7c50-4b23-87aa-03f6ef0e6b41"). InnerVolumeSpecName "kube-api-access-gh54v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.521820 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh54v\" (UniqueName: \"kubernetes.io/projected/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-kube-api-access-gh54v\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.548055 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d08781bc-7c50-4b23-87aa-03f6ef0e6b41" (UID: "d08781bc-7c50-4b23-87aa-03f6ef0e6b41"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.625931 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.643331 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d08781bc-7c50-4b23-87aa-03f6ef0e6b41" (UID: "d08781bc-7c50-4b23-87aa-03f6ef0e6b41"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.649636 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d08781bc-7c50-4b23-87aa-03f6ef0e6b41" (UID: "d08781bc-7c50-4b23-87aa-03f6ef0e6b41"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.692907 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.704265 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-config" (OuterVolumeSpecName: "config") pod "d08781bc-7c50-4b23-87aa-03f6ef0e6b41" (UID: "d08781bc-7c50-4b23-87aa-03f6ef0e6b41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.727475 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-config\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.727507 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.727516 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.790451 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d08781bc-7c50-4b23-87aa-03f6ef0e6b41" (UID: "d08781bc-7c50-4b23-87aa-03f6ef0e6b41"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 19:02:46 crc kubenswrapper[4770]: I0126 19:02:46.828907 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d08781bc-7c50-4b23-87aa-03f6ef0e6b41-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.226975 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.239588 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7878674dd9-pkgz7" podUID="b5223e91-68cc-4d7a-91ca-c58e530ef973" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: i/o timeout"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.377862 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"370028b3-c0af-463f-8825-b6d50f82849a","Type":"ContainerStarted","Data":"65ad1016b073772891d25343b6b4a4673303d22dd2de0dddbc7df6ba2b4b098c"}
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.379599 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a275885a-bc39-45eb-8375-9ee5b5059744","Type":"ContainerStarted","Data":"046e3114a66ab144904d2ea27364be2b72db1074b2aa02772f3fb049d2fb32cb"}
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.420945 4770 generic.go:334] "Generic (PLEG): container finished" podID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerID="102eb892f1022d60f9dd531b5b4bbe8fac91a0df1bdd9a8ac38dada4a4116e4b" exitCode=0
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.421009 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" event={"ID":"0b5878d0-6bfa-43b8-8382-7f0c503f7b24","Type":"ContainerDied","Data":"102eb892f1022d60f9dd531b5b4bbe8fac91a0df1bdd9a8ac38dada4a4116e4b"}
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.421034 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" event={"ID":"0b5878d0-6bfa-43b8-8382-7f0c503f7b24","Type":"ContainerStarted","Data":"04d964d355e8e8ea6edd62ea5207b51e10c5fcb29494bcd4a887077bec5b6f38"}
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.422100 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.442945 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb633454-1a38-4280-a3d4-8825f169e03e","Type":"ContainerStarted","Data":"b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac"}
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.443121 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api-log" containerID="cri-o://79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7" gracePeriod=30
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.443360 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.443396 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api" containerID="cri-o://b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac" gracePeriod=30
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.481892 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.488077 4770 generic.go:334] "Generic (PLEG): container finished" podID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerID="bc3f922a8c90ab70df5d9e39eaf37090517994e25776d5cda6209a84eb615cc1" exitCode=0
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.488168 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d4478bc7-wx9fs" event={"ID":"e85972ec-8d1c-4d0a-9696-a8c2bae4607f","Type":"ContainerDied","Data":"bc3f922a8c90ab70df5d9e39eaf37090517994e25776d5cda6209a84eb615cc1"}
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.501445 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf66fc99-25gns"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.502771 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5fff9c7-vsc8j" event={"ID":"061a1ade-3e2c-4fa3-af1d-79119e42b777","Type":"ContainerStarted","Data":"f2866904f70391d5a88f3c22e602518f932c9b79ecbe4df68995ed4c01033416"}
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.502945 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c5fff9c7-vsc8j"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.510295 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" podStartSLOduration=4.51027323 podStartE2EDuration="4.51027323s" podCreationTimestamp="2026-01-26 19:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:47.489315735 +0000 UTC m=+1252.054222457" watchObservedRunningTime="2026-01-26 19:02:47.51027323 +0000 UTC m=+1252.075179962"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.580265 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c5fff9c7-vsc8j" podStartSLOduration=5.580248912 podStartE2EDuration="5.580248912s" podCreationTimestamp="2026-01-26 19:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:47.579569224 +0000 UTC m=+1252.144475956" watchObservedRunningTime="2026-01-26 19:02:47.580248912 +0000 UTC m=+1252.145155644"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.604597 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.60457447 podStartE2EDuration="6.60457447s" podCreationTimestamp="2026-01-26 19:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:47.546442404 +0000 UTC m=+1252.111349136" watchObservedRunningTime="2026-01-26 19:02:47.60457447 +0000 UTC m=+1252.169481212"
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.643588 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.694830 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf66fc99-25gns"]
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.736425 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bf66fc99-25gns"]
Jan 26 19:02:47 crc kubenswrapper[4770]: I0126 19:02:47.779991 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08781bc-7c50-4b23-87aa-03f6ef0e6b41" path="/var/lib/kubelet/pods/d08781bc-7c50-4b23-87aa-03f6ef0e6b41/volumes"
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.087071 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-545575dfd-bbtbf"
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.329001 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d4478bc7-wx9fs"
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.395784 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-ovndb-tls-certs\") pod \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") "
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.395830 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-config\") pod \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") "
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.395854 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-httpd-config\") pod \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") "
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.395884 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-combined-ca-bundle\") pod \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") "
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.396007 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-internal-tls-certs\") pod \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") "
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.396031 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-public-tls-certs\") pod \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") "
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.396062 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x6m5\" (UniqueName: \"kubernetes.io/projected/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-kube-api-access-7x6m5\") pod \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\" (UID: \"e85972ec-8d1c-4d0a-9696-a8c2bae4607f\") "
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.400592 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-kube-api-access-7x6m5" (OuterVolumeSpecName: "kube-api-access-7x6m5") pod "e85972ec-8d1c-4d0a-9696-a8c2bae4607f" (UID: "e85972ec-8d1c-4d0a-9696-a8c2bae4607f"). InnerVolumeSpecName "kube-api-access-7x6m5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.426879 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e85972ec-8d1c-4d0a-9696-a8c2bae4607f" (UID: "e85972ec-8d1c-4d0a-9696-a8c2bae4607f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.477977 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-config" (OuterVolumeSpecName: "config") pod "e85972ec-8d1c-4d0a-9696-a8c2bae4607f" (UID: "e85972ec-8d1c-4d0a-9696-a8c2bae4607f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.498882 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x6m5\" (UniqueName: \"kubernetes.io/projected/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-kube-api-access-7x6m5\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.498918 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-config\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.498928 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.505809 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e85972ec-8d1c-4d0a-9696-a8c2bae4607f" (UID: "e85972ec-8d1c-4d0a-9696-a8c2bae4607f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.507751 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e85972ec-8d1c-4d0a-9696-a8c2bae4607f" (UID: "e85972ec-8d1c-4d0a-9696-a8c2bae4607f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.520135 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e85972ec-8d1c-4d0a-9696-a8c2bae4607f" (UID: "e85972ec-8d1c-4d0a-9696-a8c2bae4607f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.551972 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d4478bc7-wx9fs" event={"ID":"e85972ec-8d1c-4d0a-9696-a8c2bae4607f","Type":"ContainerDied","Data":"750520402cb7212668cd6e898f0d80efee21bf8231dc2b0712eaf5d35a0d3289"}
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.552026 4770 scope.go:117] "RemoveContainer" containerID="56bed967675182c3b2fd83364e4a6690c7d3127df5c2fa061c93f26b4908d9ba"
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.552163 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d4478bc7-wx9fs"
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.570080 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a275885a-bc39-45eb-8375-9ee5b5059744","Type":"ContainerStarted","Data":"5dde23ef11079fdb4529bc87f1f643fcc3e6605fb6c38075cd9a56fe979c5b48"}
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.574271 4770 generic.go:334] "Generic (PLEG): container finished" podID="eb633454-1a38-4280-a3d4-8825f169e03e" containerID="79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7" exitCode=143
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.574334 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb633454-1a38-4280-a3d4-8825f169e03e","Type":"ContainerDied","Data":"79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7"}
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.597244 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2bd0d21b-c128-4993-9b91-d41dea49e2b6","Type":"ContainerStarted","Data":"896ba8a8849e0da6046913e6ff16538d82573ee21de21bd8e4a0dcd7423595af"}
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.600774 4770 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.600801 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.600811 4770 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.616834 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e85972ec-8d1c-4d0a-9696-a8c2bae4607f" (UID: "e85972ec-8d1c-4d0a-9696-a8c2bae4607f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.624187 4770 scope.go:117] "RemoveContainer" containerID="bc3f922a8c90ab70df5d9e39eaf37090517994e25776d5cda6209a84eb615cc1"
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.629621 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.999521435 podStartE2EDuration="8.629602397s" podCreationTimestamp="2026-01-26 19:02:40 +0000 UTC" firstStartedPulling="2026-01-26 19:02:43.424804005 +0000 UTC m=+1247.989710737" lastFinishedPulling="2026-01-26 19:02:44.054884967 +0000 UTC m=+1248.619791699" observedRunningTime="2026-01-26 19:02:48.612621801 +0000 UTC m=+1253.177528533" watchObservedRunningTime="2026-01-26 19:02:48.629602397 +0000 UTC m=+1253.194509119"
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.702931 4770 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85972ec-8d1c-4d0a-9696-a8c2bae4607f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.890998 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56d4478bc7-wx9fs"]
Jan 26 19:02:48 crc kubenswrapper[4770]: I0126 19:02:48.903401 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56d4478bc7-wx9fs"]
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.401590 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f47668778-9m4hm" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:45480->10.217.0.160:8443: read: connection reset by peer"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.456940 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.522521 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28cnp\" (UniqueName: \"kubernetes.io/projected/859f9d5b-265e-4d91-a4e1-faca291a3073-kube-api-access-28cnp\") pod \"859f9d5b-265e-4d91-a4e1-faca291a3073\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") "
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.522590 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-config-data\") pod \"859f9d5b-265e-4d91-a4e1-faca291a3073\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") "
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.522668 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-sg-core-conf-yaml\") pod \"859f9d5b-265e-4d91-a4e1-faca291a3073\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") "
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.522760 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-scripts\") pod \"859f9d5b-265e-4d91-a4e1-faca291a3073\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") "
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.522797 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-combined-ca-bundle\") pod \"859f9d5b-265e-4d91-a4e1-faca291a3073\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") "
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.522845 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-log-httpd\") pod \"859f9d5b-265e-4d91-a4e1-faca291a3073\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") "
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.522899 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-run-httpd\") pod \"859f9d5b-265e-4d91-a4e1-faca291a3073\" (UID: \"859f9d5b-265e-4d91-a4e1-faca291a3073\") "
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.523596 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "859f9d5b-265e-4d91-a4e1-faca291a3073" (UID: "859f9d5b-265e-4d91-a4e1-faca291a3073"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.526494 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "859f9d5b-265e-4d91-a4e1-faca291a3073" (UID: "859f9d5b-265e-4d91-a4e1-faca291a3073"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.547158 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-scripts" (OuterVolumeSpecName: "scripts") pod "859f9d5b-265e-4d91-a4e1-faca291a3073" (UID: "859f9d5b-265e-4d91-a4e1-faca291a3073"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.547293 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859f9d5b-265e-4d91-a4e1-faca291a3073-kube-api-access-28cnp" (OuterVolumeSpecName: "kube-api-access-28cnp") pod "859f9d5b-265e-4d91-a4e1-faca291a3073" (UID: "859f9d5b-265e-4d91-a4e1-faca291a3073"). InnerVolumeSpecName "kube-api-access-28cnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.572818 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "859f9d5b-265e-4d91-a4e1-faca291a3073" (UID: "859f9d5b-265e-4d91-a4e1-faca291a3073"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.599015 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "859f9d5b-265e-4d91-a4e1-faca291a3073" (UID: "859f9d5b-265e-4d91-a4e1-faca291a3073"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.616581 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"370028b3-c0af-463f-8825-b6d50f82849a","Type":"ContainerStarted","Data":"d5314a5b831ada5414db2074799b52fe9ca73547fd0cb30d8bfd51520cd005d9"}
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.619107 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a275885a-bc39-45eb-8375-9ee5b5059744","Type":"ContainerStarted","Data":"f9bbde59fd2dce4e549e77bb5ecbf6465d3973e2902c6e0d2254f36436deef4a"}
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.619298 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-log" containerID="cri-o://5dde23ef11079fdb4529bc87f1f643fcc3e6605fb6c38075cd9a56fe979c5b48" gracePeriod=30
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.619964 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-httpd" containerID="cri-o://f9bbde59fd2dce4e549e77bb5ecbf6465d3973e2902c6e0d2254f36436deef4a" gracePeriod=30
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.621796 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-config-data" (OuterVolumeSpecName: "config-data") pod "859f9d5b-265e-4d91-a4e1-faca291a3073" (UID: "859f9d5b-265e-4d91-a4e1-faca291a3073"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.629579 4770 generic.go:334] "Generic (PLEG): container finished" podID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa" exitCode=1
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.629658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerDied","Data":"b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa"}
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.629719 4770 scope.go:117] "RemoveContainer" containerID="8039aeb933354a55851cbac59d3457b11d27b2949d823899c0c8600000166ed4"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.630531 4770 scope.go:117] "RemoveContainer" containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.630745 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.630949 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ba7a2e1d-7c6b-4d89-ac01-5a93fb071444)\"" pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.630980 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/859f9d5b-265e-4d91-a4e1-faca291a3073-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.630992 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28cnp\" (UniqueName: \"kubernetes.io/projected/859f9d5b-265e-4d91-a4e1-faca291a3073-kube-api-access-28cnp\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.631003 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.631030 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.631039 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.631048 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859f9d5b-265e-4d91-a4e1-faca291a3073-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.638170 4770 generic.go:334] "Generic (PLEG): container finished" podID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerID="00d8410891c3266be02e94a7492de06621996331830bc8b8d3cfe1d17da1f3fb" exitCode=0
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.638263 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f47668778-9m4hm" event={"ID":"8adb68a1-1d86-4d72-93b1-0e8e499542af","Type":"ContainerDied","Data":"00d8410891c3266be02e94a7492de06621996331830bc8b8d3cfe1d17da1f3fb"}
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.651017 4770 generic.go:334] "Generic (PLEG): container finished" podID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerID="d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc" exitCode=0
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.651107 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerDied","Data":"d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc"}
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.651143 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"859f9d5b-265e-4d91-a4e1-faca291a3073","Type":"ContainerDied","Data":"7771d14ef808b96f2e5dcdace58a42e8faaa9c2d4242f073a2c0dbb6831dacb8"}
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.651217 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.674284 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.674264493 podStartE2EDuration="6.674264493s" podCreationTimestamp="2026-01-26 19:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:49.657116822 +0000 UTC m=+1254.222023564" watchObservedRunningTime="2026-01-26 19:02:49.674264493 +0000 UTC m=+1254.239171225"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.718616 4770 scope.go:117] "RemoveContainer" containerID="428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.765871 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.783823 4770 scope.go:117] "RemoveContainer" containerID="93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe"
Jan 26 crc kubenswrapper[4770]: I0126 19:02:49.804833 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" path="/var/lib/kubelet/pods/e85972ec-8d1c-4d0a-9696-a8c2bae4607f/volumes"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.805520 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.805559 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.806768 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08781bc-7c50-4b23-87aa-03f6ef0e6b41" containerName="init"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.806790 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08781bc-7c50-4b23-87aa-03f6ef0e6b41" containerName="init"
Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.806804 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="sg-core"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.806812 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="sg-core"
Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.806827 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="proxy-httpd"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.806835 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="proxy-httpd"
Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.806845 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="ceilometer-notification-agent"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.806851 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="ceilometer-notification-agent"
Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.806861 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-httpd"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.806868 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-httpd"
Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.806881 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-api"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.806887 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-api"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.807061 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-api"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.807070 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="sg-core"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.807084 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="proxy-httpd"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.807092 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08781bc-7c50-4b23-87aa-03f6ef0e6b41" containerName="init"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.807108 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85972ec-8d1c-4d0a-9696-a8c2bae4607f" containerName="neutron-httpd"
Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.807121 4770 memory_manager.go:354] "RemoveStaleState removing state"
podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" containerName="ceilometer-notification-agent" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.813094 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.813224 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.817983 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.818159 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.857078 4770 scope.go:117] "RemoveContainer" containerID="d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.859114 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-545575dfd-bbtbf" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.916196 4770 scope.go:117] "RemoveContainer" containerID="428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd" Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.918330 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd\": container with ID starting with 428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd not found: ID does not exist" containerID="428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.918394 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd"} err="failed to get 
container status \"428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd\": rpc error: code = NotFound desc = could not find container \"428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd\": container with ID starting with 428133a165616e1ee8794de85bb2f00b71c8125c1d638f48a386fbcd6d729ddd not found: ID does not exist" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.918433 4770 scope.go:117] "RemoveContainer" containerID="93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe" Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.919228 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe\": container with ID starting with 93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe not found: ID does not exist" containerID="93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.919258 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe"} err="failed to get container status \"93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe\": rpc error: code = NotFound desc = could not find container \"93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe\": container with ID starting with 93f6bb3a27e36b6e24ae0aa4c59bfcfd632484419a97bdb6a4db416cf3c7c2fe not found: ID does not exist" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.919280 4770 scope.go:117] "RemoveContainer" containerID="d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc" Jan 26 19:02:49 crc kubenswrapper[4770]: E0126 19:02:49.919742 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc\": container with ID starting with d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc not found: ID does not exist" containerID="d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.919776 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc"} err="failed to get container status \"d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc\": rpc error: code = NotFound desc = could not find container \"d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc\": container with ID starting with d64232cb66735aacff88abae1735029e7a65dc66c72e47bb2bf0d882b50a1efc not found: ID does not exist" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.937109 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b6b9fb758-6nb49"] Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.937613 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b6b9fb758-6nb49" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api-log" containerID="cri-o://7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db" gracePeriod=30 Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.938127 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b6b9fb758-6nb49" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api" containerID="cri-o://57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a" gracePeriod=30 Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.951187 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-log-httpd\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.951299 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.951319 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.951430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbgn\" (UniqueName: \"kubernetes.io/projected/344aff3a-e526-4210-b754-3adc82d36fdd-kube-api-access-tgbgn\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.951446 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-scripts\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.951499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-run-httpd\") pod \"ceilometer-0\" (UID: 
\"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:49 crc kubenswrapper[4770]: I0126 19:02:49.951536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-config-data\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.053343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.053388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.053457 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbgn\" (UniqueName: \"kubernetes.io/projected/344aff3a-e526-4210-b754-3adc82d36fdd-kube-api-access-tgbgn\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.053473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-scripts\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.053501 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-run-httpd\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.053534 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-config-data\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.053582 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-log-httpd\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.054030 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-log-httpd\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.055107 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-run-httpd\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.061290 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-scripts\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.063293 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.064319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-config-data\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.073431 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.079459 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgbgn\" (UniqueName: \"kubernetes.io/projected/344aff3a-e526-4210-b754-3adc82d36fdd-kube-api-access-tgbgn\") pod \"ceilometer-0\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.168426 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.734083 4770 generic.go:334] "Generic (PLEG): container finished" podID="a275885a-bc39-45eb-8375-9ee5b5059744" containerID="f9bbde59fd2dce4e549e77bb5ecbf6465d3973e2902c6e0d2254f36436deef4a" exitCode=0 Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.734328 4770 generic.go:334] "Generic (PLEG): container finished" podID="a275885a-bc39-45eb-8375-9ee5b5059744" containerID="5dde23ef11079fdb4529bc87f1f643fcc3e6605fb6c38075cd9a56fe979c5b48" exitCode=143 Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.734455 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a275885a-bc39-45eb-8375-9ee5b5059744","Type":"ContainerDied","Data":"f9bbde59fd2dce4e549e77bb5ecbf6465d3973e2902c6e0d2254f36436deef4a"} Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.734515 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a275885a-bc39-45eb-8375-9ee5b5059744","Type":"ContainerDied","Data":"5dde23ef11079fdb4529bc87f1f643fcc3e6605fb6c38075cd9a56fe979c5b48"} Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.791686 4770 generic.go:334] "Generic (PLEG): container finished" podID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerID="7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db" exitCode=143 Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.791774 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b6b9fb758-6nb49" event={"ID":"fe2ce7f1-97a3-42c4-a619-19ee33fee046","Type":"ContainerDied","Data":"7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db"} Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.799920 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"370028b3-c0af-463f-8825-b6d50f82849a","Type":"ContainerStarted","Data":"f78dfa1d3a5134b539ce9c2cf7c5586d64c8ca56933ea624fa5f3aaea9ea8a87"} Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.800083 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-log" containerID="cri-o://d5314a5b831ada5414db2074799b52fe9ca73547fd0cb30d8bfd51520cd005d9" gracePeriod=30 Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.800527 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-httpd" containerID="cri-o://f78dfa1d3a5134b539ce9c2cf7c5586d64c8ca56933ea624fa5f3aaea9ea8a87" gracePeriod=30 Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.802565 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.859921 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.85990011 podStartE2EDuration="7.85990011s" podCreationTimestamp="2026-01-26 19:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:50.851851819 +0000 UTC m=+1255.416758551" watchObservedRunningTime="2026-01-26 19:02:50.85990011 +0000 UTC m=+1255.424806842" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.877308 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-logs\") pod \"a275885a-bc39-45eb-8375-9ee5b5059744\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " Jan 26 19:02:50 crc 
kubenswrapper[4770]: I0126 19:02:50.877426 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-combined-ca-bundle\") pod \"a275885a-bc39-45eb-8375-9ee5b5059744\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.877510 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw7tn\" (UniqueName: \"kubernetes.io/projected/a275885a-bc39-45eb-8375-9ee5b5059744-kube-api-access-dw7tn\") pod \"a275885a-bc39-45eb-8375-9ee5b5059744\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.877545 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-scripts\") pod \"a275885a-bc39-45eb-8375-9ee5b5059744\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.877591 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-config-data\") pod \"a275885a-bc39-45eb-8375-9ee5b5059744\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.877678 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a275885a-bc39-45eb-8375-9ee5b5059744\" (UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.877721 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-httpd-run\") pod \"a275885a-bc39-45eb-8375-9ee5b5059744\" 
(UID: \"a275885a-bc39-45eb-8375-9ee5b5059744\") " Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.879016 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a275885a-bc39-45eb-8375-9ee5b5059744" (UID: "a275885a-bc39-45eb-8375-9ee5b5059744"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.881126 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-logs" (OuterVolumeSpecName: "logs") pod "a275885a-bc39-45eb-8375-9ee5b5059744" (UID: "a275885a-bc39-45eb-8375-9ee5b5059744"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.884124 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.887807 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-scripts" (OuterVolumeSpecName: "scripts") pod "a275885a-bc39-45eb-8375-9ee5b5059744" (UID: "a275885a-bc39-45eb-8375-9ee5b5059744"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.889194 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "a275885a-bc39-45eb-8375-9ee5b5059744" (UID: "a275885a-bc39-45eb-8375-9ee5b5059744"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.892398 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a275885a-bc39-45eb-8375-9ee5b5059744-kube-api-access-dw7tn" (OuterVolumeSpecName: "kube-api-access-dw7tn") pod "a275885a-bc39-45eb-8375-9ee5b5059744" (UID: "a275885a-bc39-45eb-8375-9ee5b5059744"). InnerVolumeSpecName "kube-api-access-dw7tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.920756 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a275885a-bc39-45eb-8375-9ee5b5059744" (UID: "a275885a-bc39-45eb-8375-9ee5b5059744"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.957602 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-config-data" (OuterVolumeSpecName: "config-data") pod "a275885a-bc39-45eb-8375-9ee5b5059744" (UID: "a275885a-bc39-45eb-8375-9ee5b5059744"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.981033 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw7tn\" (UniqueName: \"kubernetes.io/projected/a275885a-bc39-45eb-8375-9ee5b5059744-kube-api-access-dw7tn\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.981068 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.981078 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.981105 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.981114 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.981123 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a275885a-bc39-45eb-8375-9ee5b5059744-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:50 crc kubenswrapper[4770]: I0126 19:02:50.981130 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a275885a-bc39-45eb-8375-9ee5b5059744-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.002925 4770 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.083312 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.310500 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.592149 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.783005 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859f9d5b-265e-4d91-a4e1-faca291a3073" path="/var/lib/kubelet/pods/859f9d5b-265e-4d91-a4e1-faca291a3073/volumes" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.812274 4770 generic.go:334] "Generic (PLEG): container finished" podID="370028b3-c0af-463f-8825-b6d50f82849a" containerID="f78dfa1d3a5134b539ce9c2cf7c5586d64c8ca56933ea624fa5f3aaea9ea8a87" exitCode=0 Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.812533 4770 generic.go:334] "Generic (PLEG): container finished" podID="370028b3-c0af-463f-8825-b6d50f82849a" containerID="d5314a5b831ada5414db2074799b52fe9ca73547fd0cb30d8bfd51520cd005d9" exitCode=143 Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.812813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"370028b3-c0af-463f-8825-b6d50f82849a","Type":"ContainerDied","Data":"f78dfa1d3a5134b539ce9c2cf7c5586d64c8ca56933ea624fa5f3aaea9ea8a87"} Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.813357 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"370028b3-c0af-463f-8825-b6d50f82849a","Type":"ContainerDied","Data":"d5314a5b831ada5414db2074799b52fe9ca73547fd0cb30d8bfd51520cd005d9"} Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.816278 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a275885a-bc39-45eb-8375-9ee5b5059744","Type":"ContainerDied","Data":"046e3114a66ab144904d2ea27364be2b72db1074b2aa02772f3fb049d2fb32cb"} Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.816545 4770 scope.go:117] "RemoveContainer" containerID="f9bbde59fd2dce4e549e77bb5ecbf6465d3973e2902c6e0d2254f36436deef4a" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.816509 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.832082 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerStarted","Data":"506924be1dae20ac361e6e39e015c44250f223bb39dae688710dc71327e2346d"} Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.832119 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerStarted","Data":"0f12c613bc6c746fbd88d4d2149bea876f2724f3d59784dd26de6985a2ed33b8"} Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.903042 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.903934 4770 scope.go:117] "RemoveContainer" containerID="5dde23ef11079fdb4529bc87f1f643fcc3e6605fb6c38075cd9a56fe979c5b48" Jan 26 19:02:51 crc kubenswrapper[4770]: I0126 19:02:51.940984 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 
19:02:52.003344 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: E0126 19:02:52.003854 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-httpd" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.003873 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-httpd" Jan 26 19:02:52 crc kubenswrapper[4770]: E0126 19:02:52.003894 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-log" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.003901 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-log" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.004134 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-log" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.004157 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" containerName="glance-httpd" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.005487 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.009725 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.010680 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.018853 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.021642 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.037138 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.111596 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-config-data\") pod \"370028b3-c0af-463f-8825-b6d50f82849a\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.111925 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-combined-ca-bundle\") pod \"370028b3-c0af-463f-8825-b6d50f82849a\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.111961 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-logs\") pod \"370028b3-c0af-463f-8825-b6d50f82849a\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112034 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-httpd-run\") pod \"370028b3-c0af-463f-8825-b6d50f82849a\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112050 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-scripts\") pod \"370028b3-c0af-463f-8825-b6d50f82849a\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112140 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8tx7\" (UniqueName: \"kubernetes.io/projected/370028b3-c0af-463f-8825-b6d50f82849a-kube-api-access-s8tx7\") pod \"370028b3-c0af-463f-8825-b6d50f82849a\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112179 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"370028b3-c0af-463f-8825-b6d50f82849a\" (UID: \"370028b3-c0af-463f-8825-b6d50f82849a\") " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112431 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112473 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-logs\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112490 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112534 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcm72\" (UniqueName: \"kubernetes.io/projected/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-kube-api-access-mcm72\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112571 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112588 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112635 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.112670 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.116117 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-logs" (OuterVolumeSpecName: "logs") pod "370028b3-c0af-463f-8825-b6d50f82849a" (UID: "370028b3-c0af-463f-8825-b6d50f82849a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.116242 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "370028b3-c0af-463f-8825-b6d50f82849a" (UID: "370028b3-c0af-463f-8825-b6d50f82849a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.122224 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370028b3-c0af-463f-8825-b6d50f82849a-kube-api-access-s8tx7" (OuterVolumeSpecName: "kube-api-access-s8tx7") pod "370028b3-c0af-463f-8825-b6d50f82849a" (UID: "370028b3-c0af-463f-8825-b6d50f82849a"). InnerVolumeSpecName "kube-api-access-s8tx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.127526 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-scripts" (OuterVolumeSpecName: "scripts") pod "370028b3-c0af-463f-8825-b6d50f82849a" (UID: "370028b3-c0af-463f-8825-b6d50f82849a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.127855 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "370028b3-c0af-463f-8825-b6d50f82849a" (UID: "370028b3-c0af-463f-8825-b6d50f82849a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.182540 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "370028b3-c0af-463f-8825-b6d50f82849a" (UID: "370028b3-c0af-463f-8825-b6d50f82849a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.184113 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-config-data" (OuterVolumeSpecName: "config-data") pod "370028b3-c0af-463f-8825-b6d50f82849a" (UID: "370028b3-c0af-463f-8825-b6d50f82849a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214431 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-logs\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214480 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214532 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcm72\" (UniqueName: \"kubernetes.io/projected/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-kube-api-access-mcm72\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214567 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214582 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc 
kubenswrapper[4770]: I0126 19:02:52.214632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214668 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214727 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214794 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214806 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214816 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214824 4770 reconciler_common.go:293] 
"Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/370028b3-c0af-463f-8825-b6d50f82849a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214834 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/370028b3-c0af-463f-8825-b6d50f82849a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214843 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8tx7\" (UniqueName: \"kubernetes.io/projected/370028b3-c0af-463f-8825-b6d50f82849a-kube-api-access-s8tx7\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.214862 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.216097 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.216328 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-logs\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.221091 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: 
\"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.223516 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.227784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.229507 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.244890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.250317 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcm72\" (UniqueName: \"kubernetes.io/projected/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-kube-api-access-mcm72\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " 
pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.282183 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.292098 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.316542 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.334785 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.822824 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.823097 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.823750 4770 scope.go:117] "RemoveContainer" containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa" Jan 26 19:02:52 crc kubenswrapper[4770]: E0126 19:02:52.823952 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ba7a2e1d-7c6b-4d89-ac01-5a93fb071444)\"" 
pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.852536 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerStarted","Data":"6e2e5f796ed311d45c424e65ce768aea36774e47683d31c5132a9fdcfee26914"} Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.852577 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerStarted","Data":"ad1f14d452d5187b0da4c43ba392d1284dabf016d2394d16a6575cc46e44cf64"} Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.858495 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"370028b3-c0af-463f-8825-b6d50f82849a","Type":"ContainerDied","Data":"65ad1016b073772891d25343b6b4a4673303d22dd2de0dddbc7df6ba2b4b098c"} Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.858535 4770 scope.go:117] "RemoveContainer" containerID="f78dfa1d3a5134b539ce9c2cf7c5586d64c8ca56933ea624fa5f3aaea9ea8a87" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.858622 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.864890 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="probe" containerID="cri-o://896ba8a8849e0da6046913e6ff16538d82573ee21de21bd8e4a0dcd7423595af" gracePeriod=30 Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.864681 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="cinder-scheduler" containerID="cri-o://67489b7c08648a6b6e4d621c1f3728a8eba595ee5425af3a116e6ada81b58764" gracePeriod=30 Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.896875 4770 scope.go:117] "RemoveContainer" containerID="d5314a5b831ada5414db2074799b52fe9ca73547fd0cb30d8bfd51520cd005d9" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.920828 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.927573 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.941780 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.953621 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:02:52 crc kubenswrapper[4770]: E0126 19:02:52.954138 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-log" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.954165 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-log" Jan 26 
19:02:52 crc kubenswrapper[4770]: E0126 19:02:52.954205 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-httpd" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.954215 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-httpd" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.954508 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-log" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.954540 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="370028b3-c0af-463f-8825-b6d50f82849a" containerName="glance-httpd" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.955992 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.960294 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.960611 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 19:02:52 crc kubenswrapper[4770]: I0126 19:02:52.965981 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.042875 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.042998 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.043090 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.043158 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45lg\" (UniqueName: \"kubernetes.io/projected/ece838e9-4831-4ff8-abac-6e7a228c76a0-kube-api-access-s45lg\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.043231 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.043398 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.043436 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.043465 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.144982 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.145045 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.145080 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.145112 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-s45lg\" (UniqueName: \"kubernetes.io/projected/ece838e9-4831-4ff8-abac-6e7a228c76a0-kube-api-access-s45lg\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.145148 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.145221 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.145241 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.145258 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.146435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.146924 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.150069 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.154103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.154677 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.158178 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.158315 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.165510 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b6b9fb758-6nb49" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.172:9311/healthcheck\": read tcp 10.217.0.2:46394->10.217.0.172:9311: read: connection reset by peer" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.165540 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b6b9fb758-6nb49" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.172:9311/healthcheck\": read tcp 10.217.0.2:46378->10.217.0.172:9311: read: connection reset by peer" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.167044 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45lg\" (UniqueName: \"kubernetes.io/projected/ece838e9-4831-4ff8-abac-6e7a228c76a0-kube-api-access-s45lg\") pod \"glance-default-internal-api-0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.190690 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.296015 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.777306 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370028b3-c0af-463f-8825-b6d50f82849a" path="/var/lib/kubelet/pods/370028b3-c0af-463f-8825-b6d50f82849a/volumes" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.778511 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a275885a-bc39-45eb-8375-9ee5b5059744" path="/var/lib/kubelet/pods/a275885a-bc39-45eb-8375-9ee5b5059744/volumes" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.855263 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.894112 4770 generic.go:334] "Generic (PLEG): container finished" podID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerID="57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a" exitCode=0 Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.894162 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b6b9fb758-6nb49" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.894225 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b6b9fb758-6nb49" event={"ID":"fe2ce7f1-97a3-42c4-a619-19ee33fee046","Type":"ContainerDied","Data":"57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a"} Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.894255 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b6b9fb758-6nb49" event={"ID":"fe2ce7f1-97a3-42c4-a619-19ee33fee046","Type":"ContainerDied","Data":"329f25b12499781371ee2d0d7dca387a4e0cbeb752591c4d378eb72de4384871"} Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.894272 4770 scope.go:117] "RemoveContainer" containerID="57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.907275 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5","Type":"ContainerStarted","Data":"50abb1bee56ee2afd5d4c9d2af80fbe4a1d67cc0ac1abd6b23ed7fc939b9880f"} Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.907314 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5","Type":"ContainerStarted","Data":"d9d4d4c1c094a73473f510a94040db6c954d66434dd6c0b068d6907d5fffe243"} Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.965051 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2ce7f1-97a3-42c4-a619-19ee33fee046-logs\") pod \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.965203 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-f5cnm\" (UniqueName: \"kubernetes.io/projected/fe2ce7f1-97a3-42c4-a619-19ee33fee046-kube-api-access-f5cnm\") pod \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.965247 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-combined-ca-bundle\") pod \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.965317 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data-custom\") pod \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.965459 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data\") pod \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\" (UID: \"fe2ce7f1-97a3-42c4-a619-19ee33fee046\") " Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.966210 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe2ce7f1-97a3-42c4-a619-19ee33fee046-logs" (OuterVolumeSpecName: "logs") pod "fe2ce7f1-97a3-42c4-a619-19ee33fee046" (UID: "fe2ce7f1-97a3-42c4-a619-19ee33fee046"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.966914 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2ce7f1-97a3-42c4-a619-19ee33fee046-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.971017 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe2ce7f1-97a3-42c4-a619-19ee33fee046" (UID: "fe2ce7f1-97a3-42c4-a619-19ee33fee046"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.971283 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2ce7f1-97a3-42c4-a619-19ee33fee046-kube-api-access-f5cnm" (OuterVolumeSpecName: "kube-api-access-f5cnm") pod "fe2ce7f1-97a3-42c4-a619-19ee33fee046" (UID: "fe2ce7f1-97a3-42c4-a619-19ee33fee046"). InnerVolumeSpecName "kube-api-access-f5cnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.977902 4770 scope.go:117] "RemoveContainer" containerID="7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db" Jan 26 19:02:53 crc kubenswrapper[4770]: I0126 19:02:53.994990 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe2ce7f1-97a3-42c4-a619-19ee33fee046" (UID: "fe2ce7f1-97a3-42c4-a619-19ee33fee046"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.015573 4770 scope.go:117] "RemoveContainer" containerID="57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a" Jan 26 19:02:54 crc kubenswrapper[4770]: E0126 19:02:54.016190 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a\": container with ID starting with 57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a not found: ID does not exist" containerID="57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.016245 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a"} err="failed to get container status \"57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a\": rpc error: code = NotFound desc = could not find container \"57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a\": container with ID starting with 57d49934de184c33ca2b82310e0fcfae9e731004dbb0a14c7ea990c16002b12a not found: ID does not exist" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.016281 4770 scope.go:117] "RemoveContainer" containerID="7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.016449 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data" (OuterVolumeSpecName: "config-data") pod "fe2ce7f1-97a3-42c4-a619-19ee33fee046" (UID: "fe2ce7f1-97a3-42c4-a619-19ee33fee046"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:54 crc kubenswrapper[4770]: E0126 19:02:54.016786 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db\": container with ID starting with 7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db not found: ID does not exist" containerID="7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.016812 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db"} err="failed to get container status \"7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db\": rpc error: code = NotFound desc = could not find container \"7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db\": container with ID starting with 7ab669c7eb294101cb517f812cd89b2f0aecad05b5edda0145652878e1e862db not found: ID does not exist" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.068875 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.068907 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5cnm\" (UniqueName: \"kubernetes.io/projected/fe2ce7f1-97a3-42c4-a619-19ee33fee046-kube-api-access-f5cnm\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.068919 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.068929 4770 
reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe2ce7f1-97a3-42c4-a619-19ee33fee046-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.082736 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:02:54 crc kubenswrapper[4770]: W0126 19:02:54.105838 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece838e9_4831_4ff8_abac_6e7a228c76a0.slice/crio-11fd1e788fb6dca44e501fbf20aee6153ca379390fff6e788d040357c7fcd1a2 WatchSource:0}: Error finding container 11fd1e788fb6dca44e501fbf20aee6153ca379390fff6e788d040357c7fcd1a2: Status 404 returned error can't find the container with id 11fd1e788fb6dca44e501fbf20aee6153ca379390fff6e788d040357c7fcd1a2 Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.216836 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.291633 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b6b9fb758-6nb49"] Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.322309 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b6b9fb758-6nb49"] Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.349488 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578c4bbfdc-rppnp"] Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.349894 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" podUID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerName="dnsmasq-dns" containerID="cri-o://6e8b555a7423b88b024771946627a03b8c05fae48df5eeddb563d408fa362ac4" gracePeriod=10 Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 
19:02:54.926306 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.928175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece838e9-4831-4ff8-abac-6e7a228c76a0","Type":"ContainerStarted","Data":"11fd1e788fb6dca44e501fbf20aee6153ca379390fff6e788d040357c7fcd1a2"} Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.932276 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerStarted","Data":"8639123eccd6144288e4f6c20dfd9483cdcc95cf59c6b3caa5831f8a85d349c2"} Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.932472 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.935956 4770 generic.go:334] "Generic (PLEG): container finished" podID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerID="896ba8a8849e0da6046913e6ff16538d82573ee21de21bd8e4a0dcd7423595af" exitCode=0 Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.936012 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2bd0d21b-c128-4993-9b91-d41dea49e2b6","Type":"ContainerDied","Data":"896ba8a8849e0da6046913e6ff16538d82573ee21de21bd8e4a0dcd7423595af"} Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.958774 4770 generic.go:334] "Generic (PLEG): container finished" podID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerID="6e8b555a7423b88b024771946627a03b8c05fae48df5eeddb563d408fa362ac4" exitCode=0 Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.958846 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" 
event={"ID":"15ebf879-39fd-4f97-8d59-053c1a600e85","Type":"ContainerDied","Data":"6e8b555a7423b88b024771946627a03b8c05fae48df5eeddb563d408fa362ac4"} Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.958878 4770 scope.go:117] "RemoveContainer" containerID="6e8b555a7423b88b024771946627a03b8c05fae48df5eeddb563d408fa362ac4" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.958979 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c4bbfdc-rppnp" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.983996 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5","Type":"ContainerStarted","Data":"6bd1d45a5d8ccbafc12d51c71edd786325e8f0e5594ea365165c6be157504471"} Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.988421 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.521018744 podStartE2EDuration="5.988402447s" podCreationTimestamp="2026-01-26 19:02:49 +0000 UTC" firstStartedPulling="2026-01-26 19:02:50.886367837 +0000 UTC m=+1255.451274569" lastFinishedPulling="2026-01-26 19:02:54.35375154 +0000 UTC m=+1258.918658272" observedRunningTime="2026-01-26 19:02:54.980314204 +0000 UTC m=+1259.545220956" watchObservedRunningTime="2026-01-26 19:02:54.988402447 +0000 UTC m=+1259.553309179" Jan 26 19:02:54 crc kubenswrapper[4770]: I0126 19:02:54.993438 4770 scope.go:117] "RemoveContainer" containerID="2ef3004668b2d9e8f4d28ffb56fd009badc36ce84dbf78db48710584335f8cfd" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.011027 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.011007717 podStartE2EDuration="4.011007717s" podCreationTimestamp="2026-01-26 19:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:55.005238389 +0000 UTC m=+1259.570145121" watchObservedRunningTime="2026-01-26 19:02:55.011007717 +0000 UTC m=+1259.575914449" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.013425 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blr6\" (UniqueName: \"kubernetes.io/projected/15ebf879-39fd-4f97-8d59-053c1a600e85-kube-api-access-2blr6\") pod \"15ebf879-39fd-4f97-8d59-053c1a600e85\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.013534 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-config\") pod \"15ebf879-39fd-4f97-8d59-053c1a600e85\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.013555 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-swift-storage-0\") pod \"15ebf879-39fd-4f97-8d59-053c1a600e85\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.013576 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-svc\") pod \"15ebf879-39fd-4f97-8d59-053c1a600e85\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.013646 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-sb\") pod \"15ebf879-39fd-4f97-8d59-053c1a600e85\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " Jan 26 19:02:55 
crc kubenswrapper[4770]: I0126 19:02:55.013719 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-nb\") pod \"15ebf879-39fd-4f97-8d59-053c1a600e85\" (UID: \"15ebf879-39fd-4f97-8d59-053c1a600e85\") " Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.019905 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ebf879-39fd-4f97-8d59-053c1a600e85-kube-api-access-2blr6" (OuterVolumeSpecName: "kube-api-access-2blr6") pod "15ebf879-39fd-4f97-8d59-053c1a600e85" (UID: "15ebf879-39fd-4f97-8d59-053c1a600e85"). InnerVolumeSpecName "kube-api-access-2blr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.070663 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15ebf879-39fd-4f97-8d59-053c1a600e85" (UID: "15ebf879-39fd-4f97-8d59-053c1a600e85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.093434 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-config" (OuterVolumeSpecName: "config") pod "15ebf879-39fd-4f97-8d59-053c1a600e85" (UID: "15ebf879-39fd-4f97-8d59-053c1a600e85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.094534 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15ebf879-39fd-4f97-8d59-053c1a600e85" (UID: "15ebf879-39fd-4f97-8d59-053c1a600e85"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.110764 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15ebf879-39fd-4f97-8d59-053c1a600e85" (UID: "15ebf879-39fd-4f97-8d59-053c1a600e85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.112936 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "15ebf879-39fd-4f97-8d59-053c1a600e85" (UID: "15ebf879-39fd-4f97-8d59-053c1a600e85"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.117333 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.117363 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.117376 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2blr6\" (UniqueName: \"kubernetes.io/projected/15ebf879-39fd-4f97-8d59-053c1a600e85-kube-api-access-2blr6\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.117390 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.117400 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.117411 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15ebf879-39fd-4f97-8d59-053c1a600e85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.315503 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578c4bbfdc-rppnp"] Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.332716 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578c4bbfdc-rppnp"] Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.783176 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ebf879-39fd-4f97-8d59-053c1a600e85" path="/var/lib/kubelet/pods/15ebf879-39fd-4f97-8d59-053c1a600e85/volumes" Jan 26 19:02:55 crc kubenswrapper[4770]: I0126 19:02:55.784105 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" path="/var/lib/kubelet/pods/fe2ce7f1-97a3-42c4-a619-19ee33fee046/volumes" Jan 26 19:02:56 crc kubenswrapper[4770]: I0126 19:02:56.017450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece838e9-4831-4ff8-abac-6e7a228c76a0","Type":"ContainerStarted","Data":"269364b51e44a84af09018313383df30b0914867186361e7273ef6697cc6aad7"} Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.038205 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ece838e9-4831-4ff8-abac-6e7a228c76a0","Type":"ContainerStarted","Data":"4705719a97fcaaa1fbf1180abe94cffd0cc6f7b492dcbeacdf5f1a4d4a4363d2"} Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.048018 4770 generic.go:334] "Generic (PLEG): container finished" podID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerID="67489b7c08648a6b6e4d621c1f3728a8eba595ee5425af3a116e6ada81b58764" exitCode=0 Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.048062 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2bd0d21b-c128-4993-9b91-d41dea49e2b6","Type":"ContainerDied","Data":"67489b7c08648a6b6e4d621c1f3728a8eba595ee5425af3a116e6ada81b58764"} Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.048088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2bd0d21b-c128-4993-9b91-d41dea49e2b6","Type":"ContainerDied","Data":"44c0870864990a96fb665b964f5fd98709d8ffc985dcf362aa22e4a970bc5118"} Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.048101 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44c0870864990a96fb665b964f5fd98709d8ffc985dcf362aa22e4a970bc5118" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.059495 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.059475217 podStartE2EDuration="5.059475217s" podCreationTimestamp="2026-01-26 19:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:02:57.056944779 +0000 UTC m=+1261.621851501" watchObservedRunningTime="2026-01-26 19:02:57.059475217 +0000 UTC m=+1261.624381949" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.066059 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.159758 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data-custom\") pod \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.159810 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4lh\" (UniqueName: \"kubernetes.io/projected/2bd0d21b-c128-4993-9b91-d41dea49e2b6-kube-api-access-dk4lh\") pod \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.159999 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-scripts\") pod \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.160088 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bd0d21b-c128-4993-9b91-d41dea49e2b6-etc-machine-id\") pod \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.160174 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data\") pod \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.160216 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2bd0d21b-c128-4993-9b91-d41dea49e2b6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2bd0d21b-c128-4993-9b91-d41dea49e2b6" (UID: "2bd0d21b-c128-4993-9b91-d41dea49e2b6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.160264 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-combined-ca-bundle\") pod \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\" (UID: \"2bd0d21b-c128-4993-9b91-d41dea49e2b6\") " Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.160668 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bd0d21b-c128-4993-9b91-d41dea49e2b6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.166053 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2bd0d21b-c128-4993-9b91-d41dea49e2b6" (UID: "2bd0d21b-c128-4993-9b91-d41dea49e2b6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.166080 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-scripts" (OuterVolumeSpecName: "scripts") pod "2bd0d21b-c128-4993-9b91-d41dea49e2b6" (UID: "2bd0d21b-c128-4993-9b91-d41dea49e2b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.166726 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd0d21b-c128-4993-9b91-d41dea49e2b6-kube-api-access-dk4lh" (OuterVolumeSpecName: "kube-api-access-dk4lh") pod "2bd0d21b-c128-4993-9b91-d41dea49e2b6" (UID: "2bd0d21b-c128-4993-9b91-d41dea49e2b6"). InnerVolumeSpecName "kube-api-access-dk4lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.226497 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bd0d21b-c128-4993-9b91-d41dea49e2b6" (UID: "2bd0d21b-c128-4993-9b91-d41dea49e2b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.262565 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.262611 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4lh\" (UniqueName: \"kubernetes.io/projected/2bd0d21b-c128-4993-9b91-d41dea49e2b6-kube-api-access-dk4lh\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.262627 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.262639 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.285637 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data" (OuterVolumeSpecName: "config-data") pod "2bd0d21b-c128-4993-9b91-d41dea49e2b6" (UID: "2bd0d21b-c128-4993-9b91-d41dea49e2b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:02:57 crc kubenswrapper[4770]: I0126 19:02:57.363958 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd0d21b-c128-4993-9b91-d41dea49e2b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.054774 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.100102 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.108809 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.129014 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:58 crc kubenswrapper[4770]: E0126 19:02:58.129655 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api-log" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.129682 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api-log" Jan 26 19:02:58 crc kubenswrapper[4770]: E0126 19:02:58.129743 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="probe" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.129754 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="probe" Jan 26 19:02:58 crc kubenswrapper[4770]: E0126 19:02:58.129775 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.129785 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api" Jan 26 19:02:58 crc kubenswrapper[4770]: E0126 19:02:58.129797 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="cinder-scheduler" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.129805 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="cinder-scheduler" Jan 26 19:02:58 crc kubenswrapper[4770]: E0126 19:02:58.129825 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerName="dnsmasq-dns" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.129833 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerName="dnsmasq-dns" Jan 26 19:02:58 crc kubenswrapper[4770]: E0126 19:02:58.129861 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerName="init" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.129870 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerName="init" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.130097 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" 
containerName="barbican-api" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.130129 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ebf879-39fd-4f97-8d59-053c1a600e85" containerName="dnsmasq-dns" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.130153 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="cinder-scheduler" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.130162 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" containerName="probe" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.130177 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2ce7f1-97a3-42c4-a619-19ee33fee046" containerName="barbican-api-log" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.131496 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.134766 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.137480 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.183038 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-scripts\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.183104 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf3cbbc4-d990-4d7d-9514-28beda8c084e-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.183138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.183188 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-config-data\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.183218 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48ckp\" (UniqueName: \"kubernetes.io/projected/bf3cbbc4-d990-4d7d-9514-28beda8c084e-kube-api-access-48ckp\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.183280 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.284866 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-scripts\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " 
pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.284952 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf3cbbc4-d990-4d7d-9514-28beda8c084e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.284998 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.285066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-config-data\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.285107 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48ckp\" (UniqueName: \"kubernetes.io/projected/bf3cbbc4-d990-4d7d-9514-28beda8c084e-kube-api-access-48ckp\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.285176 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.286407 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf3cbbc4-d990-4d7d-9514-28beda8c084e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.290648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-scripts\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.291086 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.295991 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-config-data\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.297067 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3cbbc4-d990-4d7d-9514-28beda8c084e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.304804 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48ckp\" (UniqueName: \"kubernetes.io/projected/bf3cbbc4-d990-4d7d-9514-28beda8c084e-kube-api-access-48ckp\") pod \"cinder-scheduler-0\" (UID: 
\"bf3cbbc4-d990-4d7d-9514-28beda8c084e\") " pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.453365 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 19:02:58 crc kubenswrapper[4770]: I0126 19:02:58.676723 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f47668778-9m4hm" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 26 19:02:59 crc kubenswrapper[4770]: I0126 19:02:59.004893 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 19:02:59 crc kubenswrapper[4770]: W0126 19:02:59.012863 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf3cbbc4_d990_4d7d_9514_28beda8c084e.slice/crio-0148c75d749a96feb89730eb2510b7ab803cc145704171fb76d9ea76803a76ba WatchSource:0}: Error finding container 0148c75d749a96feb89730eb2510b7ab803cc145704171fb76d9ea76803a76ba: Status 404 returned error can't find the container with id 0148c75d749a96feb89730eb2510b7ab803cc145704171fb76d9ea76803a76ba Jan 26 19:02:59 crc kubenswrapper[4770]: I0126 19:02:59.063426 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf3cbbc4-d990-4d7d-9514-28beda8c084e","Type":"ContainerStarted","Data":"0148c75d749a96feb89730eb2510b7ab803cc145704171fb76d9ea76803a76ba"} Jan 26 19:02:59 crc kubenswrapper[4770]: I0126 19:02:59.174170 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 19:02:59 crc kubenswrapper[4770]: I0126 19:02:59.779854 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd0d21b-c128-4993-9b91-d41dea49e2b6" 
path="/var/lib/kubelet/pods/2bd0d21b-c128-4993-9b91-d41dea49e2b6/volumes" Jan 26 19:03:00 crc kubenswrapper[4770]: I0126 19:03:00.082727 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf3cbbc4-d990-4d7d-9514-28beda8c084e","Type":"ContainerStarted","Data":"a44c11fc9b98f75b9b8c2f335f4c36342e0965ae2d4a191e542b5a129b01c220"} Jan 26 19:03:00 crc kubenswrapper[4770]: I0126 19:03:00.318456 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:03:00 crc kubenswrapper[4770]: I0126 19:03:00.495386 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dfdbdd84d-x7fsz" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.094622 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bf3cbbc4-d990-4d7d-9514-28beda8c084e","Type":"ContainerStarted","Data":"2074b42675fd926bfefe26d2b6fce180c23eb2d730747ec0d9f13ffb4ebbff10"} Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.125466 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.125443387 podStartE2EDuration="3.125443387s" podCreationTimestamp="2026-01-26 19:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:01.116663996 +0000 UTC m=+1265.681570738" watchObservedRunningTime="2026-01-26 19:03:01.125443387 +0000 UTC m=+1265.690350119" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.239226 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-dfccf5f44-hghd8" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.447581 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.449212 4770 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.454414 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.454649 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.454822 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d5hpr" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.462687 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.565656 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db423aff-dffd-46a6-bd83-765c623ab77c-openstack-config-secret\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.565739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db423aff-dffd-46a6-bd83-765c623ab77c-openstack-config\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.565871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgz2\" (UniqueName: \"kubernetes.io/projected/db423aff-dffd-46a6-bd83-765c623ab77c-kube-api-access-wbgz2\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: 
I0126 19:03:01.565904 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db423aff-dffd-46a6-bd83-765c623ab77c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.667835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgz2\" (UniqueName: \"kubernetes.io/projected/db423aff-dffd-46a6-bd83-765c623ab77c-kube-api-access-wbgz2\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.667890 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db423aff-dffd-46a6-bd83-765c623ab77c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.667961 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db423aff-dffd-46a6-bd83-765c623ab77c-openstack-config-secret\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.667977 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db423aff-dffd-46a6-bd83-765c623ab77c-openstack-config\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.669309 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/db423aff-dffd-46a6-bd83-765c623ab77c-openstack-config\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.676924 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db423aff-dffd-46a6-bd83-765c623ab77c-openstack-config-secret\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.678301 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db423aff-dffd-46a6-bd83-765c623ab77c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.687380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgz2\" (UniqueName: \"kubernetes.io/projected/db423aff-dffd-46a6-bd83-765c623ab77c-kube-api-access-wbgz2\") pod \"openstackclient\" (UID: \"db423aff-dffd-46a6-bd83-765c623ab77c\") " pod="openstack/openstackclient" Jan 26 19:03:01 crc kubenswrapper[4770]: I0126 19:03:01.773541 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 19:03:02 crc kubenswrapper[4770]: W0126 19:03:02.232279 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb423aff_dffd_46a6_bd83_765c623ab77c.slice/crio-521981f8dd57940e0eb999df3eff4efff48868b7ced59a4883d602233b02b1f5 WatchSource:0}: Error finding container 521981f8dd57940e0eb999df3eff4efff48868b7ced59a4883d602233b02b1f5: Status 404 returned error can't find the container with id 521981f8dd57940e0eb999df3eff4efff48868b7ced59a4883d602233b02b1f5 Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.232365 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.335256 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.335327 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.367125 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.388221 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.822357 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.822439 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:03:02 crc kubenswrapper[4770]: I0126 19:03:02.823246 4770 scope.go:117] "RemoveContainer" 
containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa" Jan 26 19:03:02 crc kubenswrapper[4770]: E0126 19:03:02.823617 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ba7a2e1d-7c6b-4d89-ac01-5a93fb071444)\"" pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.121023 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"db423aff-dffd-46a6-bd83-765c623ab77c","Type":"ContainerStarted","Data":"521981f8dd57940e0eb999df3eff4efff48868b7ced59a4883d602233b02b1f5"} Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.121254 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.121270 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.296489 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.296543 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.342615 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.345619 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:03 crc kubenswrapper[4770]: I0126 19:03:03.454175 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 19:03:04 crc kubenswrapper[4770]: I0126 19:03:04.148453 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:04 crc kubenswrapper[4770]: I0126 19:03:04.148734 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:06 crc kubenswrapper[4770]: I0126 19:03:06.000427 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:03:06 crc kubenswrapper[4770]: I0126 19:03:06.001001 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:03:06 crc kubenswrapper[4770]: I0126 19:03:06.003059 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:03:06 crc kubenswrapper[4770]: I0126 19:03:06.165894 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:03:06 crc kubenswrapper[4770]: I0126 19:03:06.165927 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:03:06 crc kubenswrapper[4770]: I0126 19:03:06.793320 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:06 crc kubenswrapper[4770]: I0126 19:03:06.828641 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.147755 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8688c56555-rsnrn"] Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.149622 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.152561 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.152561 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.153419 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.173789 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8688c56555-rsnrn"] Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.296126 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65d3af51-41f4-40e5-949e-a3eb611043bb-run-httpd\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.296251 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-combined-ca-bundle\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.296300 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-internal-tls-certs\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc 
kubenswrapper[4770]: I0126 19:03:07.296325 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-public-tls-certs\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.296414 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65d3af51-41f4-40e5-949e-a3eb611043bb-log-httpd\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.296479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-config-data\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.296583 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65d3af51-41f4-40e5-949e-a3eb611043bb-etc-swift\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.296622 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskdr\" (UniqueName: \"kubernetes.io/projected/65d3af51-41f4-40e5-949e-a3eb611043bb-kube-api-access-dskdr\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " 
pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398242 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65d3af51-41f4-40e5-949e-a3eb611043bb-log-httpd\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398340 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-config-data\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398373 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65d3af51-41f4-40e5-949e-a3eb611043bb-etc-swift\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398390 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dskdr\" (UniqueName: \"kubernetes.io/projected/65d3af51-41f4-40e5-949e-a3eb611043bb-kube-api-access-dskdr\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398414 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65d3af51-41f4-40e5-949e-a3eb611043bb-run-httpd\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc 
kubenswrapper[4770]: I0126 19:03:07.398443 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-combined-ca-bundle\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398469 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-internal-tls-certs\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-public-tls-certs\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398689 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65d3af51-41f4-40e5-949e-a3eb611043bb-log-httpd\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.398958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65d3af51-41f4-40e5-949e-a3eb611043bb-run-httpd\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.405546 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-public-tls-certs\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.405938 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-combined-ca-bundle\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.408890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/65d3af51-41f4-40e5-949e-a3eb611043bb-etc-swift\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.408972 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-config-data\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.412339 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65d3af51-41f4-40e5-949e-a3eb611043bb-internal-tls-certs\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.430626 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dskdr\" (UniqueName: 
\"kubernetes.io/projected/65d3af51-41f4-40e5-949e-a3eb611043bb-kube-api-access-dskdr\") pod \"swift-proxy-8688c56555-rsnrn\" (UID: \"65d3af51-41f4-40e5-949e-a3eb611043bb\") " pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.472869 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.814038 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.814343 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-central-agent" containerID="cri-o://506924be1dae20ac361e6e39e015c44250f223bb39dae688710dc71327e2346d" gracePeriod=30 Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.815792 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="proxy-httpd" containerID="cri-o://8639123eccd6144288e4f6c20dfd9483cdcc95cf59c6b3caa5831f8a85d349c2" gracePeriod=30 Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.815856 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="sg-core" containerID="cri-o://6e2e5f796ed311d45c424e65ce768aea36774e47683d31c5132a9fdcfee26914" gracePeriod=30 Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 19:03:07.815889 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-notification-agent" containerID="cri-o://ad1f14d452d5187b0da4c43ba392d1284dabf016d2394d16a6575cc46e44cf64" gracePeriod=30 Jan 26 19:03:07 crc kubenswrapper[4770]: I0126 
19:03:07.820050 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.103786 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8688c56555-rsnrn"] Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.235025 4770 generic.go:334] "Generic (PLEG): container finished" podID="344aff3a-e526-4210-b754-3adc82d36fdd" containerID="8639123eccd6144288e4f6c20dfd9483cdcc95cf59c6b3caa5831f8a85d349c2" exitCode=0 Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.235060 4770 generic.go:334] "Generic (PLEG): container finished" podID="344aff3a-e526-4210-b754-3adc82d36fdd" containerID="6e2e5f796ed311d45c424e65ce768aea36774e47683d31c5132a9fdcfee26914" exitCode=2 Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.235106 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerDied","Data":"8639123eccd6144288e4f6c20dfd9483cdcc95cf59c6b3caa5831f8a85d349c2"} Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.235151 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerDied","Data":"6e2e5f796ed311d45c424e65ce768aea36774e47683d31c5132a9fdcfee26914"} Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.239748 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8688c56555-rsnrn" event={"ID":"65d3af51-41f4-40e5-949e-a3eb611043bb","Type":"ContainerStarted","Data":"2c6736a886b67b3a93c8597e316f08b9313fb236732ceaed616cfbf01b18ae73"} Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.585569 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.675910 4770 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-f47668778-9m4hm" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 26 19:03:08 crc kubenswrapper[4770]: I0126 19:03:08.676039 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.195852 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7pgdw"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.197670 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.209189 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7pgdw"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.300642 4770 generic.go:334] "Generic (PLEG): container finished" podID="344aff3a-e526-4210-b754-3adc82d36fdd" containerID="ad1f14d452d5187b0da4c43ba392d1284dabf016d2394d16a6575cc46e44cf64" exitCode=0 Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.300671 4770 generic.go:334] "Generic (PLEG): container finished" podID="344aff3a-e526-4210-b754-3adc82d36fdd" containerID="506924be1dae20ac361e6e39e015c44250f223bb39dae688710dc71327e2346d" exitCode=0 Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.300752 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerDied","Data":"ad1f14d452d5187b0da4c43ba392d1284dabf016d2394d16a6575cc46e44cf64"} Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.300780 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerDied","Data":"506924be1dae20ac361e6e39e015c44250f223bb39dae688710dc71327e2346d"} Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.303122 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-blkxf"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.304359 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.307230 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8688c56555-rsnrn" event={"ID":"65d3af51-41f4-40e5-949e-a3eb611043bb","Type":"ContainerStarted","Data":"97270e5bdbab96c26051e6fc055b09d1837dbc0132f7c34f778dc9d89b03b834"} Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.315749 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-016a-account-create-update-spb7k"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.317074 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.318989 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.338929 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-blkxf"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.339262 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zvhn\" (UniqueName: \"kubernetes.io/projected/298782e0-4453-412a-b9c9-08a16d4317d6-kube-api-access-6zvhn\") pod \"nova-api-db-create-7pgdw\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.339480 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298782e0-4453-412a-b9c9-08a16d4317d6-operator-scripts\") pod \"nova-api-db-create-7pgdw\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.351905 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-016a-account-create-update-spb7k"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.442203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298782e0-4453-412a-b9c9-08a16d4317d6-operator-scripts\") pod \"nova-api-db-create-7pgdw\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.442312 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cda29038-706d-42e7-9b63-f6c2a3313ff3-operator-scripts\") pod \"nova-api-016a-account-create-update-spb7k\" (UID: \"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.442527 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqtr\" (UniqueName: \"kubernetes.io/projected/53685f07-5a65-44be-b2e9-1eb713d3ab04-kube-api-access-ztqtr\") pod \"nova-cell0-db-create-blkxf\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.442566 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53685f07-5a65-44be-b2e9-1eb713d3ab04-operator-scripts\") pod \"nova-cell0-db-create-blkxf\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.442614 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zvhn\" (UniqueName: \"kubernetes.io/projected/298782e0-4453-412a-b9c9-08a16d4317d6-kube-api-access-6zvhn\") pod \"nova-api-db-create-7pgdw\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.442635 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vzww\" (UniqueName: \"kubernetes.io/projected/cda29038-706d-42e7-9b63-f6c2a3313ff3-kube-api-access-7vzww\") pod \"nova-api-016a-account-create-update-spb7k\" (UID: \"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.444324 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298782e0-4453-412a-b9c9-08a16d4317d6-operator-scripts\") pod \"nova-api-db-create-7pgdw\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.470678 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zvhn\" (UniqueName: \"kubernetes.io/projected/298782e0-4453-412a-b9c9-08a16d4317d6-kube-api-access-6zvhn\") pod \"nova-api-db-create-7pgdw\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.516749 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-t69pr"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.518002 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.528258 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e5ff-account-create-update-ptwhv"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.529974 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.533351 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.538452 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e5ff-account-create-update-ptwhv"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.545010 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqtr\" (UniqueName: \"kubernetes.io/projected/53685f07-5a65-44be-b2e9-1eb713d3ab04-kube-api-access-ztqtr\") pod \"nova-cell0-db-create-blkxf\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.545067 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53685f07-5a65-44be-b2e9-1eb713d3ab04-operator-scripts\") pod \"nova-cell0-db-create-blkxf\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.545103 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vzww\" (UniqueName: \"kubernetes.io/projected/cda29038-706d-42e7-9b63-f6c2a3313ff3-kube-api-access-7vzww\") pod \"nova-api-016a-account-create-update-spb7k\" (UID: \"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.546341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda29038-706d-42e7-9b63-f6c2a3313ff3-operator-scripts\") pod \"nova-api-016a-account-create-update-spb7k\" (UID: 
\"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.547398 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda29038-706d-42e7-9b63-f6c2a3313ff3-operator-scripts\") pod \"nova-api-016a-account-create-update-spb7k\" (UID: \"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.547667 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53685f07-5a65-44be-b2e9-1eb713d3ab04-operator-scripts\") pod \"nova-cell0-db-create-blkxf\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.549057 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-t69pr"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.562175 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vzww\" (UniqueName: \"kubernetes.io/projected/cda29038-706d-42e7-9b63-f6c2a3313ff3-kube-api-access-7vzww\") pod \"nova-api-016a-account-create-update-spb7k\" (UID: \"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.572858 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.581301 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqtr\" (UniqueName: \"kubernetes.io/projected/53685f07-5a65-44be-b2e9-1eb713d3ab04-kube-api-access-ztqtr\") pod \"nova-cell0-db-create-blkxf\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.642897 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.647835 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblgx\" (UniqueName: \"kubernetes.io/projected/dad84030-54e1-4ca4-a1a5-1c2bac22679b-kube-api-access-rblgx\") pod \"nova-cell0-e5ff-account-create-update-ptwhv\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.647911 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad84030-54e1-4ca4-a1a5-1c2bac22679b-operator-scripts\") pod \"nova-cell0-e5ff-account-create-update-ptwhv\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.648039 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269c34bc-d8f7-4c68-bbff-ff5ff812de92-operator-scripts\") pod \"nova-cell1-db-create-t69pr\" (UID: \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 
19:03:09.648185 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhp2\" (UniqueName: \"kubernetes.io/projected/269c34bc-d8f7-4c68-bbff-ff5ff812de92-kube-api-access-cdhp2\") pod \"nova-cell1-db-create-t69pr\" (UID: \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.654943 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.725465 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7d5e-account-create-update-th69h"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.726763 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.730322 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.744744 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7d5e-account-create-update-th69h"] Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.749676 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269c34bc-d8f7-4c68-bbff-ff5ff812de92-operator-scripts\") pod \"nova-cell1-db-create-t69pr\" (UID: \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.749845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhp2\" (UniqueName: \"kubernetes.io/projected/269c34bc-d8f7-4c68-bbff-ff5ff812de92-kube-api-access-cdhp2\") pod \"nova-cell1-db-create-t69pr\" (UID: 
\"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.749926 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblgx\" (UniqueName: \"kubernetes.io/projected/dad84030-54e1-4ca4-a1a5-1c2bac22679b-kube-api-access-rblgx\") pod \"nova-cell0-e5ff-account-create-update-ptwhv\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.750017 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad84030-54e1-4ca4-a1a5-1c2bac22679b-operator-scripts\") pod \"nova-cell0-e5ff-account-create-update-ptwhv\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.750566 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269c34bc-d8f7-4c68-bbff-ff5ff812de92-operator-scripts\") pod \"nova-cell1-db-create-t69pr\" (UID: \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.751045 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad84030-54e1-4ca4-a1a5-1c2bac22679b-operator-scripts\") pod \"nova-cell0-e5ff-account-create-update-ptwhv\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.773858 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rblgx\" (UniqueName: 
\"kubernetes.io/projected/dad84030-54e1-4ca4-a1a5-1c2bac22679b-kube-api-access-rblgx\") pod \"nova-cell0-e5ff-account-create-update-ptwhv\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.785612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhp2\" (UniqueName: \"kubernetes.io/projected/269c34bc-d8f7-4c68-bbff-ff5ff812de92-kube-api-access-cdhp2\") pod \"nova-cell1-db-create-t69pr\" (UID: \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.851327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkqc\" (UniqueName: \"kubernetes.io/projected/602f406f-cd56-4b2b-8709-8114f7e1d34a-kube-api-access-4hkqc\") pod \"nova-cell1-7d5e-account-create-update-th69h\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.851561 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602f406f-cd56-4b2b-8709-8114f7e1d34a-operator-scripts\") pod \"nova-cell1-7d5e-account-create-update-th69h\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.928924 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.938591 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.953500 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602f406f-cd56-4b2b-8709-8114f7e1d34a-operator-scripts\") pod \"nova-cell1-7d5e-account-create-update-th69h\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.953559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkqc\" (UniqueName: \"kubernetes.io/projected/602f406f-cd56-4b2b-8709-8114f7e1d34a-kube-api-access-4hkqc\") pod \"nova-cell1-7d5e-account-create-update-th69h\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.955128 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602f406f-cd56-4b2b-8709-8114f7e1d34a-operator-scripts\") pod \"nova-cell1-7d5e-account-create-update-th69h\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:09 crc kubenswrapper[4770]: I0126 19:03:09.975957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkqc\" (UniqueName: \"kubernetes.io/projected/602f406f-cd56-4b2b-8709-8114f7e1d34a-kube-api-access-4hkqc\") pod \"nova-cell1-7d5e-account-create-update-th69h\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:10 crc kubenswrapper[4770]: I0126 19:03:10.062395 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:13 crc kubenswrapper[4770]: I0126 19:03:13.614738 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c5fff9c7-vsc8j" Jan 26 19:03:13 crc kubenswrapper[4770]: I0126 19:03:13.691558 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74fdc6454-kxn5b"] Jan 26 19:03:13 crc kubenswrapper[4770]: I0126 19:03:13.691932 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74fdc6454-kxn5b" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-httpd" containerID="cri-o://7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79" gracePeriod=30 Jan 26 19:03:13 crc kubenswrapper[4770]: I0126 19:03:13.692189 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74fdc6454-kxn5b" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-api" containerID="cri-o://3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97" gracePeriod=30 Jan 26 19:03:14 crc kubenswrapper[4770]: I0126 19:03:14.359610 4770 generic.go:334] "Generic (PLEG): container finished" podID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerID="7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79" exitCode=0 Jan 26 19:03:14 crc kubenswrapper[4770]: I0126 19:03:14.359717 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fdc6454-kxn5b" event={"ID":"10eb4373-dea4-4b6f-9c1d-d1c49352699d","Type":"ContainerDied","Data":"7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79"} Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.220236 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.408357 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-config-data\") pod \"344aff3a-e526-4210-b754-3adc82d36fdd\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.408813 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-scripts\") pod \"344aff3a-e526-4210-b754-3adc82d36fdd\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.408965 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-log-httpd\") pod \"344aff3a-e526-4210-b754-3adc82d36fdd\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.409024 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgbgn\" (UniqueName: \"kubernetes.io/projected/344aff3a-e526-4210-b754-3adc82d36fdd-kube-api-access-tgbgn\") pod \"344aff3a-e526-4210-b754-3adc82d36fdd\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.409124 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-run-httpd\") pod \"344aff3a-e526-4210-b754-3adc82d36fdd\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.409197 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-sg-core-conf-yaml\") pod \"344aff3a-e526-4210-b754-3adc82d36fdd\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.409265 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-combined-ca-bundle\") pod \"344aff3a-e526-4210-b754-3adc82d36fdd\" (UID: \"344aff3a-e526-4210-b754-3adc82d36fdd\") " Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.416404 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/344aff3a-e526-4210-b754-3adc82d36fdd-kube-api-access-tgbgn" (OuterVolumeSpecName: "kube-api-access-tgbgn") pod "344aff3a-e526-4210-b754-3adc82d36fdd" (UID: "344aff3a-e526-4210-b754-3adc82d36fdd"). InnerVolumeSpecName "kube-api-access-tgbgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.416827 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "344aff3a-e526-4210-b754-3adc82d36fdd" (UID: "344aff3a-e526-4210-b754-3adc82d36fdd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.417143 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "344aff3a-e526-4210-b754-3adc82d36fdd" (UID: "344aff3a-e526-4210-b754-3adc82d36fdd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.424666 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"344aff3a-e526-4210-b754-3adc82d36fdd","Type":"ContainerDied","Data":"0f12c613bc6c746fbd88d4d2149bea876f2724f3d59784dd26de6985a2ed33b8"} Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.424760 4770 scope.go:117] "RemoveContainer" containerID="8639123eccd6144288e4f6c20dfd9483cdcc95cf59c6b3caa5831f8a85d349c2" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.425135 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.435670 4770 generic.go:334] "Generic (PLEG): container finished" podID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerID="424e384591d9962673acb328847231755ae004f2ec839d227ef88b67b1f4fa9e" exitCode=137 Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.435740 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f47668778-9m4hm" event={"ID":"8adb68a1-1d86-4d72-93b1-0e8e499542af","Type":"ContainerDied","Data":"424e384591d9962673acb328847231755ae004f2ec839d227ef88b67b1f4fa9e"} Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.436731 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-scripts" (OuterVolumeSpecName: "scripts") pod "344aff3a-e526-4210-b754-3adc82d36fdd" (UID: "344aff3a-e526-4210-b754-3adc82d36fdd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.501324 4770 scope.go:117] "RemoveContainer" containerID="6e2e5f796ed311d45c424e65ce768aea36774e47683d31c5132a9fdcfee26914" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.513620 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.513646 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.513655 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/344aff3a-e526-4210-b754-3adc82d36fdd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.513663 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgbgn\" (UniqueName: \"kubernetes.io/projected/344aff3a-e526-4210-b754-3adc82d36fdd-kube-api-access-tgbgn\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.543356 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7d5e-account-create-update-th69h"] Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.589797 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "344aff3a-e526-4210-b754-3adc82d36fdd" (UID: "344aff3a-e526-4210-b754-3adc82d36fdd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.626290 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:16 crc kubenswrapper[4770]: W0126 19:03:16.684575 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod298782e0_4453_412a_b9c9_08a16d4317d6.slice/crio-74c19264253f0f063aff8c3a8bf709a9732dec28a8aef8d7a8d52645c526ebec WatchSource:0}: Error finding container 74c19264253f0f063aff8c3a8bf709a9732dec28a8aef8d7a8d52645c526ebec: Status 404 returned error can't find the container with id 74c19264253f0f063aff8c3a8bf709a9732dec28a8aef8d7a8d52645c526ebec Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.695516 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7pgdw"] Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.724621 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "344aff3a-e526-4210-b754-3adc82d36fdd" (UID: "344aff3a-e526-4210-b754-3adc82d36fdd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.734475 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.768733 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-config-data" (OuterVolumeSpecName: "config-data") pod "344aff3a-e526-4210-b754-3adc82d36fdd" (UID: "344aff3a-e526-4210-b754-3adc82d36fdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.798518 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-blkxf"] Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.834853 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-016a-account-create-update-spb7k"] Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.839489 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/344aff3a-e526-4210-b754-3adc82d36fdd-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.845936 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-t69pr"] Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.944129 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e5ff-account-create-update-ptwhv"] Jan 26 19:03:16 crc kubenswrapper[4770]: I0126 19:03:16.974870 4770 scope.go:117] "RemoveContainer" containerID="ad1f14d452d5187b0da4c43ba392d1284dabf016d2394d16a6575cc46e44cf64" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.012134 4770 scope.go:117] "RemoveContainer" 
containerID="506924be1dae20ac361e6e39e015c44250f223bb39dae688710dc71327e2346d" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.069657 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.146360 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.151942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-combined-ca-bundle\") pod \"8adb68a1-1d86-4d72-93b1-0e8e499542af\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.152087 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z85wb\" (UniqueName: \"kubernetes.io/projected/8adb68a1-1d86-4d72-93b1-0e8e499542af-kube-api-access-z85wb\") pod \"8adb68a1-1d86-4d72-93b1-0e8e499542af\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.152320 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-secret-key\") pod \"8adb68a1-1d86-4d72-93b1-0e8e499542af\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.152516 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8adb68a1-1d86-4d72-93b1-0e8e499542af-logs\") pod \"8adb68a1-1d86-4d72-93b1-0e8e499542af\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.152583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-tls-certs\") pod \"8adb68a1-1d86-4d72-93b1-0e8e499542af\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.152607 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-config-data\") pod \"8adb68a1-1d86-4d72-93b1-0e8e499542af\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.152629 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-scripts\") pod \"8adb68a1-1d86-4d72-93b1-0e8e499542af\" (UID: \"8adb68a1-1d86-4d72-93b1-0e8e499542af\") " Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.155542 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8adb68a1-1d86-4d72-93b1-0e8e499542af-logs" (OuterVolumeSpecName: "logs") pod "8adb68a1-1d86-4d72-93b1-0e8e499542af" (UID: "8adb68a1-1d86-4d72-93b1-0e8e499542af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.161905 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8adb68a1-1d86-4d72-93b1-0e8e499542af" (UID: "8adb68a1-1d86-4d72-93b1-0e8e499542af"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.169752 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.177844 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8adb68a1-1d86-4d72-93b1-0e8e499542af-kube-api-access-z85wb" (OuterVolumeSpecName: "kube-api-access-z85wb") pod "8adb68a1-1d86-4d72-93b1-0e8e499542af" (UID: "8adb68a1-1d86-4d72-93b1-0e8e499542af"). InnerVolumeSpecName "kube-api-access-z85wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.181734 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:17 crc kubenswrapper[4770]: E0126 19:03:17.182154 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-central-agent" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182170 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-central-agent" Jan 26 19:03:17 crc kubenswrapper[4770]: E0126 19:03:17.182190 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="proxy-httpd" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182197 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="proxy-httpd" Jan 26 19:03:17 crc kubenswrapper[4770]: E0126 19:03:17.182205 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="sg-core" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182211 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="sg-core" Jan 
26 19:03:17 crc kubenswrapper[4770]: E0126 19:03:17.182218 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-notification-agent" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182224 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-notification-agent" Jan 26 19:03:17 crc kubenswrapper[4770]: E0126 19:03:17.182242 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182249 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon" Jan 26 19:03:17 crc kubenswrapper[4770]: E0126 19:03:17.182258 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon-log" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182264 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon-log" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182449 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="proxy-httpd" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182459 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="sg-core" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182473 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon-log" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182483 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" containerName="horizon" Jan 26 19:03:17 crc 
kubenswrapper[4770]: I0126 19:03:17.182492 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-central-agent" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.182506 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" containerName="ceilometer-notification-agent" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.191386 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.191489 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.194453 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.194781 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrpjb\" (UniqueName: \"kubernetes.io/projected/04b53aaa-102c-4132-8965-51019fa30104-kube-api-access-rrpjb\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-log-httpd\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-scripts\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254564 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-config-data\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254612 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254670 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254742 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-run-httpd\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254800 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8adb68a1-1d86-4d72-93b1-0e8e499542af-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254814 4770 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z85wb\" (UniqueName: \"kubernetes.io/projected/8adb68a1-1d86-4d72-93b1-0e8e499542af-kube-api-access-z85wb\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.254824 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.356313 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-run-httpd\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.356393 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrpjb\" (UniqueName: \"kubernetes.io/projected/04b53aaa-102c-4132-8965-51019fa30104-kube-api-access-rrpjb\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.356431 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-log-httpd\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.356466 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-scripts\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.356486 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-config-data\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.356557 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.356615 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.357394 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-log-httpd\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.357549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-run-httpd\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.365377 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-scripts\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 
19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.371398 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.378318 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrpjb\" (UniqueName: \"kubernetes.io/projected/04b53aaa-102c-4132-8965-51019fa30104-kube-api-access-rrpjb\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.384258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.399853 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-config-data\") pod \"ceilometer-0\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.444497 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-config-data" (OuterVolumeSpecName: "config-data") pod "8adb68a1-1d86-4d72-93b1-0e8e499542af" (UID: "8adb68a1-1d86-4d72-93b1-0e8e499542af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.444769 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7pgdw" event={"ID":"298782e0-4453-412a-b9c9-08a16d4317d6","Type":"ContainerStarted","Data":"2b9314a446cdd3e7b8d85b59cf7dd678d25ace9f96b03abd5300bf3073df7ae9"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.444807 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7pgdw" event={"ID":"298782e0-4453-412a-b9c9-08a16d4317d6","Type":"ContainerStarted","Data":"74c19264253f0f063aff8c3a8bf709a9732dec28a8aef8d7a8d52645c526ebec"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.447241 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-016a-account-create-update-spb7k" event={"ID":"cda29038-706d-42e7-9b63-f6c2a3313ff3","Type":"ContainerStarted","Data":"5959dce8bdc67c24e85004c3dc5fdf371effa985fca55e454c3963b96319c268"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.447271 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-016a-account-create-update-spb7k" event={"ID":"cda29038-706d-42e7-9b63-f6c2a3313ff3","Type":"ContainerStarted","Data":"e080ccfbee97fc9b5ebd46fbd57bdc996de74468a2f3a57c339098633be3f221"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.461653 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" event={"ID":"dad84030-54e1-4ca4-a1a5-1c2bac22679b","Type":"ContainerStarted","Data":"02833b13c5b68cc38920db7af35343a1d348d9b49b73f24497d0611f0de5bcdc"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.466772 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.466848 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-7pgdw" podStartSLOduration=8.466827466 podStartE2EDuration="8.466827466s" podCreationTimestamp="2026-01-26 19:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:17.460853332 +0000 UTC m=+1282.025760064" watchObservedRunningTime="2026-01-26 19:03:17.466827466 +0000 UTC m=+1282.031734198" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.468011 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" event={"ID":"602f406f-cd56-4b2b-8709-8114f7e1d34a","Type":"ContainerStarted","Data":"7b22c4f20b0d5eac0d1b6f2237e05c7984161037fc68a653ac02a964e7836828"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.468167 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" event={"ID":"602f406f-cd56-4b2b-8709-8114f7e1d34a","Type":"ContainerStarted","Data":"570a8bf3dad5c033db5e4d89288e125e00b064d868ff8fdb60befa446b2a2b74"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.482530 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-scripts" (OuterVolumeSpecName: "scripts") pod "8adb68a1-1d86-4d72-93b1-0e8e499542af" (UID: "8adb68a1-1d86-4d72-93b1-0e8e499542af"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.483218 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-016a-account-create-update-spb7k" podStartSLOduration=8.483200605 podStartE2EDuration="8.483200605s" podCreationTimestamp="2026-01-26 19:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:17.482365913 +0000 UTC m=+1282.047272645" watchObservedRunningTime="2026-01-26 19:03:17.483200605 +0000 UTC m=+1282.048107337" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.488331 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"db423aff-dffd-46a6-bd83-765c623ab77c","Type":"ContainerStarted","Data":"942c248d8450e7dcaa568c9cf4ea3fc01fcbc95f6638e69633f7094509965814"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.508688 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t69pr" event={"ID":"269c34bc-d8f7-4c68-bbff-ff5ff812de92","Type":"ContainerStarted","Data":"17bfda71737f913902517af58328ac4019e1ef77155eb6c3e698bcc8238818ea"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.508733 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t69pr" event={"ID":"269c34bc-d8f7-4c68-bbff-ff5ff812de92","Type":"ContainerStarted","Data":"2d7a52e39ded588764fbeae1a88e0878fe473cdf12bc1a136f01f9007b9b978f"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.510677 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.515067 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-blkxf" event={"ID":"53685f07-5a65-44be-b2e9-1eb713d3ab04","Type":"ContainerStarted","Data":"b8f17c371492c9862560f9ae618021be408583e78380988230865f5512fa96bc"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.526108 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" podStartSLOduration=8.526089513 podStartE2EDuration="8.526089513s" podCreationTimestamp="2026-01-26 19:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:17.496248234 +0000 UTC m=+1282.061154966" watchObservedRunningTime="2026-01-26 19:03:17.526089513 +0000 UTC m=+1282.090996245" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.535370 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.028130235 podStartE2EDuration="16.535354608s" podCreationTimestamp="2026-01-26 19:03:01 +0000 UTC" firstStartedPulling="2026-01-26 19:03:02.234032319 +0000 UTC m=+1266.798939051" lastFinishedPulling="2026-01-26 19:03:15.741256692 +0000 UTC m=+1280.306163424" observedRunningTime="2026-01-26 19:03:17.515165133 +0000 UTC m=+1282.080071865" watchObservedRunningTime="2026-01-26 19:03:17.535354608 +0000 UTC m=+1282.100261340" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.540364 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8688c56555-rsnrn" event={"ID":"65d3af51-41f4-40e5-949e-a3eb611043bb","Type":"ContainerStarted","Data":"7adc8b2c7904c57cb72b5df119b5ea2f65915e79d82adb52dfd84d507d76bd82"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.541902 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.541923 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.545779 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-t69pr" podStartSLOduration=8.545759113999999 podStartE2EDuration="8.545759114s" podCreationTimestamp="2026-01-26 19:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:17.534077213 +0000 UTC m=+1282.098983945" watchObservedRunningTime="2026-01-26 19:03:17.545759114 +0000 UTC m=+1282.110665846" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.560368 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8688c56555-rsnrn" podUID="65d3af51-41f4-40e5-949e-a3eb611043bb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.574876 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8adb68a1-1d86-4d72-93b1-0e8e499542af-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.587280 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f47668778-9m4hm" event={"ID":"8adb68a1-1d86-4d72-93b1-0e8e499542af","Type":"ContainerDied","Data":"606523be3b1a5051c8560d0dea73b8f9a87810b1791f5d86df063d3fad8cdfbc"} Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.588443 4770 scope.go:117] "RemoveContainer" containerID="00d8410891c3266be02e94a7492de06621996331830bc8b8d3cfe1d17da1f3fb" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.589834 4770 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8adb68a1-1d86-4d72-93b1-0e8e499542af" (UID: "8adb68a1-1d86-4d72-93b1-0e8e499542af"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.590580 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f47668778-9m4hm" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.592115 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8688c56555-rsnrn" podStartSLOduration=10.592075385 podStartE2EDuration="10.592075385s" podCreationTimestamp="2026-01-26 19:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:17.573819064 +0000 UTC m=+1282.138725796" watchObservedRunningTime="2026-01-26 19:03:17.592075385 +0000 UTC m=+1282.156982117" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.607950 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8adb68a1-1d86-4d72-93b1-0e8e499542af" (UID: "8adb68a1-1d86-4d72-93b1-0e8e499542af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.681246 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.681274 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8adb68a1-1d86-4d72-93b1-0e8e499542af-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.769377 4770 scope.go:117] "RemoveContainer" containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa" Jan 26 19:03:17 crc kubenswrapper[4770]: I0126 19:03:17.790785 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="344aff3a-e526-4210-b754-3adc82d36fdd" path="/var/lib/kubelet/pods/344aff3a-e526-4210-b754-3adc82d36fdd/volumes" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.120435 4770 scope.go:117] "RemoveContainer" containerID="424e384591d9962673acb328847231755ae004f2ec839d227ef88b67b1f4fa9e" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.210674 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:18 crc kubenswrapper[4770]: W0126 19:03:18.235238 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04b53aaa_102c_4132_8965_51019fa30104.slice/crio-6d9af1e761a9cf84dc72abe2dca60edf611496f86d7682d034aea52d0c934d99 WatchSource:0}: Error finding container 6d9af1e761a9cf84dc72abe2dca60edf611496f86d7682d034aea52d0c934d99: Status 404 returned error can't find the container with id 6d9af1e761a9cf84dc72abe2dca60edf611496f86d7682d034aea52d0c934d99 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.294279 4770 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.304173 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f47668778-9m4hm"] Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.314340 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f47668778-9m4hm"] Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.399266 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb633454-1a38-4280-a3d4-8825f169e03e-etc-machine-id\") pod \"eb633454-1a38-4280-a3d4-8825f169e03e\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.399389 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-combined-ca-bundle\") pod \"eb633454-1a38-4280-a3d4-8825f169e03e\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.399427 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data-custom\") pod \"eb633454-1a38-4280-a3d4-8825f169e03e\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.399501 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb633454-1a38-4280-a3d4-8825f169e03e-logs\") pod \"eb633454-1a38-4280-a3d4-8825f169e03e\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.399544 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-scripts\") pod \"eb633454-1a38-4280-a3d4-8825f169e03e\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.399563 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st7sg\" (UniqueName: \"kubernetes.io/projected/eb633454-1a38-4280-a3d4-8825f169e03e-kube-api-access-st7sg\") pod \"eb633454-1a38-4280-a3d4-8825f169e03e\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.399630 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data\") pod \"eb633454-1a38-4280-a3d4-8825f169e03e\" (UID: \"eb633454-1a38-4280-a3d4-8825f169e03e\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.407432 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb633454-1a38-4280-a3d4-8825f169e03e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eb633454-1a38-4280-a3d4-8825f169e03e" (UID: "eb633454-1a38-4280-a3d4-8825f169e03e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.408490 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb633454-1a38-4280-a3d4-8825f169e03e-logs" (OuterVolumeSpecName: "logs") pod "eb633454-1a38-4280-a3d4-8825f169e03e" (UID: "eb633454-1a38-4280-a3d4-8825f169e03e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.417058 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eb633454-1a38-4280-a3d4-8825f169e03e" (UID: "eb633454-1a38-4280-a3d4-8825f169e03e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.418177 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-scripts" (OuterVolumeSpecName: "scripts") pod "eb633454-1a38-4280-a3d4-8825f169e03e" (UID: "eb633454-1a38-4280-a3d4-8825f169e03e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.420551 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.476283 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb633454-1a38-4280-a3d4-8825f169e03e-kube-api-access-st7sg" (OuterVolumeSpecName: "kube-api-access-st7sg") pod "eb633454-1a38-4280-a3d4-8825f169e03e" (UID: "eb633454-1a38-4280-a3d4-8825f169e03e"). InnerVolumeSpecName "kube-api-access-st7sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.476815 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb633454-1a38-4280-a3d4-8825f169e03e" (UID: "eb633454-1a38-4280-a3d4-8825f169e03e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.505256 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/10eb4373-dea4-4b6f-9c1d-d1c49352699d-kube-api-access-cq4wd\") pod \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.505321 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-httpd-config\") pod \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.505457 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-ovndb-tls-certs\") pod \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.505535 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-config\") pod \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.505601 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-combined-ca-bundle\") pod \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\" (UID: \"10eb4373-dea4-4b6f-9c1d-d1c49352699d\") " Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.505991 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.506003 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st7sg\" (UniqueName: \"kubernetes.io/projected/eb633454-1a38-4280-a3d4-8825f169e03e-kube-api-access-st7sg\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.506013 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb633454-1a38-4280-a3d4-8825f169e03e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.506021 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.506029 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.506037 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb633454-1a38-4280-a3d4-8825f169e03e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.541499 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.543085 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data" (OuterVolumeSpecName: "config-data") pod "eb633454-1a38-4280-a3d4-8825f169e03e" (UID: "eb633454-1a38-4280-a3d4-8825f169e03e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.543189 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10eb4373-dea4-4b6f-9c1d-d1c49352699d-kube-api-access-cq4wd" (OuterVolumeSpecName: "kube-api-access-cq4wd") pod "10eb4373-dea4-4b6f-9c1d-d1c49352699d" (UID: "10eb4373-dea4-4b6f-9c1d-d1c49352699d"). InnerVolumeSpecName "kube-api-access-cq4wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.543236 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "10eb4373-dea4-4b6f-9c1d-d1c49352699d" (UID: "10eb4373-dea4-4b6f-9c1d-d1c49352699d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.610585 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq4wd\" (UniqueName: \"kubernetes.io/projected/10eb4373-dea4-4b6f-9c1d-d1c49352699d-kube-api-access-cq4wd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.610613 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.610622 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb633454-1a38-4280-a3d4-8825f169e03e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.611583 4770 generic.go:334] "Generic (PLEG): container finished" podID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerID="3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97" 
exitCode=0 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.611638 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fdc6454-kxn5b" event={"ID":"10eb4373-dea4-4b6f-9c1d-d1c49352699d","Type":"ContainerDied","Data":"3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.611663 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fdc6454-kxn5b" event={"ID":"10eb4373-dea4-4b6f-9c1d-d1c49352699d","Type":"ContainerDied","Data":"878b584b6d82e40268b7328bcd1750de9e387945f0448164e1d8bab2d9e25aa3"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.611781 4770 scope.go:117] "RemoveContainer" containerID="7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.613175 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fdc6454-kxn5b" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.618478 4770 generic.go:334] "Generic (PLEG): container finished" podID="298782e0-4453-412a-b9c9-08a16d4317d6" containerID="2b9314a446cdd3e7b8d85b59cf7dd678d25ace9f96b03abd5300bf3073df7ae9" exitCode=0 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.618572 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7pgdw" event={"ID":"298782e0-4453-412a-b9c9-08a16d4317d6","Type":"ContainerDied","Data":"2b9314a446cdd3e7b8d85b59cf7dd678d25ace9f96b03abd5300bf3073df7ae9"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.619248 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10eb4373-dea4-4b6f-9c1d-d1c49352699d" (UID: "10eb4373-dea4-4b6f-9c1d-d1c49352699d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.620239 4770 generic.go:334] "Generic (PLEG): container finished" podID="602f406f-cd56-4b2b-8709-8114f7e1d34a" containerID="7b22c4f20b0d5eac0d1b6f2237e05c7984161037fc68a653ac02a964e7836828" exitCode=0 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.620337 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" event={"ID":"602f406f-cd56-4b2b-8709-8114f7e1d34a","Type":"ContainerDied","Data":"7b22c4f20b0d5eac0d1b6f2237e05c7984161037fc68a653ac02a964e7836828"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.651447 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-config" (OuterVolumeSpecName: "config") pod "10eb4373-dea4-4b6f-9c1d-d1c49352699d" (UID: "10eb4373-dea4-4b6f-9c1d-d1c49352699d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.659182 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "10eb4373-dea4-4b6f-9c1d-d1c49352699d" (UID: "10eb4373-dea4-4b6f-9c1d-d1c49352699d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.678236 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" event={"ID":"dad84030-54e1-4ca4-a1a5-1c2bac22679b","Type":"ContainerStarted","Data":"0b7b6ee028f0271cf8cfaa70cc68a0adeef7ec1936021f014b9dac8044ec552d"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.703535 4770 generic.go:334] "Generic (PLEG): container finished" podID="eb633454-1a38-4280-a3d4-8825f169e03e" containerID="b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac" exitCode=137 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.703664 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb633454-1a38-4280-a3d4-8825f169e03e","Type":"ContainerDied","Data":"b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.703726 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb633454-1a38-4280-a3d4-8825f169e03e","Type":"ContainerDied","Data":"f365287d8148de3246385cbdddcbd3870394c468c61d914563807b2309002881"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.703829 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.704920 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" podStartSLOduration=9.704899683 podStartE2EDuration="9.704899683s" podCreationTimestamp="2026-01-26 19:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:18.69716078 +0000 UTC m=+1283.262067512" watchObservedRunningTime="2026-01-26 19:03:18.704899683 +0000 UTC m=+1283.269806415" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.713484 4770 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.713516 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.713527 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10eb4373-dea4-4b6f-9c1d-d1c49352699d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.718236 4770 generic.go:334] "Generic (PLEG): container finished" podID="269c34bc-d8f7-4c68-bbff-ff5ff812de92" containerID="17bfda71737f913902517af58328ac4019e1ef77155eb6c3e698bcc8238818ea" exitCode=0 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.718314 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t69pr" event={"ID":"269c34bc-d8f7-4c68-bbff-ff5ff812de92","Type":"ContainerDied","Data":"17bfda71737f913902517af58328ac4019e1ef77155eb6c3e698bcc8238818ea"} Jan 26 
19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.721956 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerStarted","Data":"6d9af1e761a9cf84dc72abe2dca60edf611496f86d7682d034aea52d0c934d99"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.724055 4770 generic.go:334] "Generic (PLEG): container finished" podID="cda29038-706d-42e7-9b63-f6c2a3313ff3" containerID="5959dce8bdc67c24e85004c3dc5fdf371effa985fca55e454c3963b96319c268" exitCode=0 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.724108 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-016a-account-create-update-spb7k" event={"ID":"cda29038-706d-42e7-9b63-f6c2a3313ff3","Type":"ContainerDied","Data":"5959dce8bdc67c24e85004c3dc5fdf371effa985fca55e454c3963b96319c268"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.727882 4770 generic.go:334] "Generic (PLEG): container finished" podID="53685f07-5a65-44be-b2e9-1eb713d3ab04" containerID="01e777708fcfc3c4293f9b2c3e1c9617e79f40232f9110f5bef098ade1ff8fff" exitCode=0 Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.729931 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-blkxf" event={"ID":"53685f07-5a65-44be-b2e9-1eb713d3ab04","Type":"ContainerDied","Data":"01e777708fcfc3c4293f9b2c3e1c9617e79f40232f9110f5bef098ade1ff8fff"} Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.746857 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.933201 4770 scope.go:117] "RemoveContainer" containerID="3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.973026 4770 scope.go:117] "RemoveContainer" containerID="7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79" Jan 26 19:03:18 crc 
kubenswrapper[4770]: E0126 19:03:18.977110 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79\": container with ID starting with 7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79 not found: ID does not exist" containerID="7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.977234 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79"} err="failed to get container status \"7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79\": rpc error: code = NotFound desc = could not find container \"7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79\": container with ID starting with 7e22d10dbef4a2354abe2533056535f7b57ca7970d3338d2405ec69df63f3f79 not found: ID does not exist" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.977374 4770 scope.go:117] "RemoveContainer" containerID="3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97" Jan 26 19:03:18 crc kubenswrapper[4770]: E0126 19:03:18.977822 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97\": container with ID starting with 3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97 not found: ID does not exist" containerID="3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.977895 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97"} err="failed to get container status 
\"3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97\": rpc error: code = NotFound desc = could not find container \"3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97\": container with ID starting with 3475ccc250d5bd35ad5c99dc575644e86743d72136defce8317e70d73f981a97 not found: ID does not exist" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.977919 4770 scope.go:117] "RemoveContainer" containerID="b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac" Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.978348 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:03:18 crc kubenswrapper[4770]: I0126 19:03:18.996661 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.007980 4770 scope.go:117] "RemoveContainer" containerID="79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.007988 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74fdc6454-kxn5b"] Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.035858 4770 scope.go:117] "RemoveContainer" containerID="b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.035965 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74fdc6454-kxn5b"] Jan 26 19:03:19 crc kubenswrapper[4770]: E0126 19:03:19.039244 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac\": container with ID starting with b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac not found: ID does not exist" containerID="b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 
19:03:19.039282 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac"} err="failed to get container status \"b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac\": rpc error: code = NotFound desc = could not find container \"b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac\": container with ID starting with b568ba561d36a915fe17fd832c4a291d6365ab128bc5e3e7544ca38cbacb8bac not found: ID does not exist" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.039488 4770 scope.go:117] "RemoveContainer" containerID="79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7" Jan 26 19:03:19 crc kubenswrapper[4770]: E0126 19:03:19.039888 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7\": container with ID starting with 79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7 not found: ID does not exist" containerID="79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.039912 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7"} err="failed to get container status \"79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7\": rpc error: code = NotFound desc = could not find container \"79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7\": container with ID starting with 79529c2e6416b04621535a4203fd602fa0c33dc0551b4495be6d0cf3d6f5cbd7 not found: ID does not exist" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.047563 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:03:19 crc kubenswrapper[4770]: E0126 19:03:19.048211 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-api" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048237 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-api" Jan 26 19:03:19 crc kubenswrapper[4770]: E0126 19:03:19.048260 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048270 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api" Jan 26 19:03:19 crc kubenswrapper[4770]: E0126 19:03:19.048286 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api-log" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048294 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api-log" Jan 26 19:03:19 crc kubenswrapper[4770]: E0126 19:03:19.048330 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-httpd" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048340 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-httpd" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048581 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api-log" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048617 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-httpd" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048644 4770 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" containerName="neutron-api" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.048661 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" containerName="cinder-api" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.050416 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.058062 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.058314 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.058910 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.061447 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.229685 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-scripts\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.229792 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc30e4a5-148d-4296-b220-518e972b4f3b-logs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.229918 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-config-data\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.229962 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q5p7\" (UniqueName: \"kubernetes.io/projected/cc30e4a5-148d-4296-b220-518e972b4f3b-kube-api-access-5q5p7\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.230010 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.230067 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.230244 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.230301 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cc30e4a5-148d-4296-b220-518e972b4f3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.230360 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.331771 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.331844 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc30e4a5-148d-4296-b220-518e972b4f3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.331886 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.331969 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-scripts\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc 
kubenswrapper[4770]: I0126 19:03:19.331985 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc30e4a5-148d-4296-b220-518e972b4f3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.332001 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc30e4a5-148d-4296-b220-518e972b4f3b-logs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.332064 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-config-data\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.332084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q5p7\" (UniqueName: \"kubernetes.io/projected/cc30e4a5-148d-4296-b220-518e972b4f3b-kube-api-access-5q5p7\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.332116 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.332145 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.332417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc30e4a5-148d-4296-b220-518e972b4f3b-logs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.336207 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.336397 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.336482 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.336599 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.336666 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-config-data\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.337236 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc30e4a5-148d-4296-b220-518e972b4f3b-scripts\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.352995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q5p7\" (UniqueName: \"kubernetes.io/projected/cc30e4a5-148d-4296-b220-518e972b4f3b-kube-api-access-5q5p7\") pod \"cinder-api-0\" (UID: \"cc30e4a5-148d-4296-b220-518e972b4f3b\") " pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.380121 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.753963 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerStarted","Data":"0206bfd60c4ea84a5fb681b6836080b873679bf774e038b6df88e304b4cf1c0e"} Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.754242 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerStarted","Data":"fd52118e337409a04dab71b59f2917ed320ae3635c0c8854ecf9ab0d38b17dfc"} Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.756403 4770 generic.go:334] "Generic (PLEG): container finished" podID="dad84030-54e1-4ca4-a1a5-1c2bac22679b" containerID="0b7b6ee028f0271cf8cfaa70cc68a0adeef7ec1936021f014b9dac8044ec552d" exitCode=0 Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.756495 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" event={"ID":"dad84030-54e1-4ca4-a1a5-1c2bac22679b","Type":"ContainerDied","Data":"0b7b6ee028f0271cf8cfaa70cc68a0adeef7ec1936021f014b9dac8044ec552d"} Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.800224 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10eb4373-dea4-4b6f-9c1d-d1c49352699d" path="/var/lib/kubelet/pods/10eb4373-dea4-4b6f-9c1d-d1c49352699d/volumes" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.801284 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8adb68a1-1d86-4d72-93b1-0e8e499542af" path="/var/lib/kubelet/pods/8adb68a1-1d86-4d72-93b1-0e8e499542af/volumes" Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.802063 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb633454-1a38-4280-a3d4-8825f169e03e" path="/var/lib/kubelet/pods/eb633454-1a38-4280-a3d4-8825f169e03e/volumes" Jan 26 
19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.804065 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerStarted","Data":"faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f"} Jan 26 19:03:19 crc kubenswrapper[4770]: I0126 19:03:19.971070 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 19:03:20 crc kubenswrapper[4770]: W0126 19:03:20.015059 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc30e4a5_148d_4296_b220_518e972b4f3b.slice/crio-dc3b69f092012efd73efc29a9eb55c130e546a59c28da490e6a607c97e1eea37 WatchSource:0}: Error finding container dc3b69f092012efd73efc29a9eb55c130e546a59c28da490e6a607c97e1eea37: Status 404 returned error can't find the container with id dc3b69f092012efd73efc29a9eb55c130e546a59c28da490e6a607c97e1eea37 Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.176273 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.327910 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.356062 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.358778 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hkqc\" (UniqueName: \"kubernetes.io/projected/602f406f-cd56-4b2b-8709-8114f7e1d34a-kube-api-access-4hkqc\") pod \"602f406f-cd56-4b2b-8709-8114f7e1d34a\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.359032 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602f406f-cd56-4b2b-8709-8114f7e1d34a-operator-scripts\") pod \"602f406f-cd56-4b2b-8709-8114f7e1d34a\" (UID: \"602f406f-cd56-4b2b-8709-8114f7e1d34a\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.359954 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602f406f-cd56-4b2b-8709-8114f7e1d34a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "602f406f-cd56-4b2b-8709-8114f7e1d34a" (UID: "602f406f-cd56-4b2b-8709-8114f7e1d34a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.375102 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.378043 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602f406f-cd56-4b2b-8709-8114f7e1d34a-kube-api-access-4hkqc" (OuterVolumeSpecName: "kube-api-access-4hkqc") pod "602f406f-cd56-4b2b-8709-8114f7e1d34a" (UID: "602f406f-cd56-4b2b-8709-8114f7e1d34a"). InnerVolumeSpecName "kube-api-access-4hkqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.460482 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298782e0-4453-412a-b9c9-08a16d4317d6-operator-scripts\") pod \"298782e0-4453-412a-b9c9-08a16d4317d6\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.460549 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zvhn\" (UniqueName: \"kubernetes.io/projected/298782e0-4453-412a-b9c9-08a16d4317d6-kube-api-access-6zvhn\") pod \"298782e0-4453-412a-b9c9-08a16d4317d6\" (UID: \"298782e0-4453-412a-b9c9-08a16d4317d6\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.460615 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda29038-706d-42e7-9b63-f6c2a3313ff3-operator-scripts\") pod \"cda29038-706d-42e7-9b63-f6c2a3313ff3\" (UID: \"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.460637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vzww\" (UniqueName: \"kubernetes.io/projected/cda29038-706d-42e7-9b63-f6c2a3313ff3-kube-api-access-7vzww\") pod \"cda29038-706d-42e7-9b63-f6c2a3313ff3\" (UID: \"cda29038-706d-42e7-9b63-f6c2a3313ff3\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.461048 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hkqc\" (UniqueName: \"kubernetes.io/projected/602f406f-cd56-4b2b-8709-8114f7e1d34a-kube-api-access-4hkqc\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.461063 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/602f406f-cd56-4b2b-8709-8114f7e1d34a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.462211 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298782e0-4453-412a-b9c9-08a16d4317d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "298782e0-4453-412a-b9c9-08a16d4317d6" (UID: "298782e0-4453-412a-b9c9-08a16d4317d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.465853 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda29038-706d-42e7-9b63-f6c2a3313ff3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cda29038-706d-42e7-9b63-f6c2a3313ff3" (UID: "cda29038-706d-42e7-9b63-f6c2a3313ff3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.469924 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298782e0-4453-412a-b9c9-08a16d4317d6-kube-api-access-6zvhn" (OuterVolumeSpecName: "kube-api-access-6zvhn") pod "298782e0-4453-412a-b9c9-08a16d4317d6" (UID: "298782e0-4453-412a-b9c9-08a16d4317d6"). InnerVolumeSpecName "kube-api-access-6zvhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.470023 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda29038-706d-42e7-9b63-f6c2a3313ff3-kube-api-access-7vzww" (OuterVolumeSpecName: "kube-api-access-7vzww") pod "cda29038-706d-42e7-9b63-f6c2a3313ff3" (UID: "cda29038-706d-42e7-9b63-f6c2a3313ff3"). InnerVolumeSpecName "kube-api-access-7vzww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.563214 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztqtr\" (UniqueName: \"kubernetes.io/projected/53685f07-5a65-44be-b2e9-1eb713d3ab04-kube-api-access-ztqtr\") pod \"53685f07-5a65-44be-b2e9-1eb713d3ab04\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.563328 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53685f07-5a65-44be-b2e9-1eb713d3ab04-operator-scripts\") pod \"53685f07-5a65-44be-b2e9-1eb713d3ab04\" (UID: \"53685f07-5a65-44be-b2e9-1eb713d3ab04\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.563958 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zvhn\" (UniqueName: \"kubernetes.io/projected/298782e0-4453-412a-b9c9-08a16d4317d6-kube-api-access-6zvhn\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.563975 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cda29038-706d-42e7-9b63-f6c2a3313ff3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.563984 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vzww\" (UniqueName: \"kubernetes.io/projected/cda29038-706d-42e7-9b63-f6c2a3313ff3-kube-api-access-7vzww\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.563992 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/298782e0-4453-412a-b9c9-08a16d4317d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.564491 4770 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53685f07-5a65-44be-b2e9-1eb713d3ab04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53685f07-5a65-44be-b2e9-1eb713d3ab04" (UID: "53685f07-5a65-44be-b2e9-1eb713d3ab04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.569415 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53685f07-5a65-44be-b2e9-1eb713d3ab04-kube-api-access-ztqtr" (OuterVolumeSpecName: "kube-api-access-ztqtr") pod "53685f07-5a65-44be-b2e9-1eb713d3ab04" (UID: "53685f07-5a65-44be-b2e9-1eb713d3ab04"). InnerVolumeSpecName "kube-api-access-ztqtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.668917 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztqtr\" (UniqueName: \"kubernetes.io/projected/53685f07-5a65-44be-b2e9-1eb713d3ab04-kube-api-access-ztqtr\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.668949 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53685f07-5a65-44be-b2e9-1eb713d3ab04-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.678474 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.838972 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-016a-account-create-update-spb7k" event={"ID":"cda29038-706d-42e7-9b63-f6c2a3313ff3","Type":"ContainerDied","Data":"e080ccfbee97fc9b5ebd46fbd57bdc996de74468a2f3a57c339098633be3f221"} Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.839010 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e080ccfbee97fc9b5ebd46fbd57bdc996de74468a2f3a57c339098633be3f221" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.839013 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-016a-account-create-update-spb7k" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.840581 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-blkxf" event={"ID":"53685f07-5a65-44be-b2e9-1eb713d3ab04","Type":"ContainerDied","Data":"b8f17c371492c9862560f9ae618021be408583e78380988230865f5512fa96bc"} Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.840604 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-blkxf" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.840614 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f17c371492c9862560f9ae618021be408583e78380988230865f5512fa96bc" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.859088 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7pgdw" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.859093 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7pgdw" event={"ID":"298782e0-4453-412a-b9c9-08a16d4317d6","Type":"ContainerDied","Data":"74c19264253f0f063aff8c3a8bf709a9732dec28a8aef8d7a8d52645c526ebec"} Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.859137 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c19264253f0f063aff8c3a8bf709a9732dec28a8aef8d7a8d52645c526ebec" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.867533 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t69pr" event={"ID":"269c34bc-d8f7-4c68-bbff-ff5ff812de92","Type":"ContainerDied","Data":"2d7a52e39ded588764fbeae1a88e0878fe473cdf12bc1a136f01f9007b9b978f"} Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.867590 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d7a52e39ded588764fbeae1a88e0878fe473cdf12bc1a136f01f9007b9b978f" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.867672 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-t69pr" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.872550 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269c34bc-d8f7-4c68-bbff-ff5ff812de92-operator-scripts\") pod \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\" (UID: \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.872873 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdhp2\" (UniqueName: \"kubernetes.io/projected/269c34bc-d8f7-4c68-bbff-ff5ff812de92-kube-api-access-cdhp2\") pod \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\" (UID: \"269c34bc-d8f7-4c68-bbff-ff5ff812de92\") " Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.874176 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269c34bc-d8f7-4c68-bbff-ff5ff812de92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "269c34bc-d8f7-4c68-bbff-ff5ff812de92" (UID: "269c34bc-d8f7-4c68-bbff-ff5ff812de92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.878186 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269c34bc-d8f7-4c68-bbff-ff5ff812de92-kube-api-access-cdhp2" (OuterVolumeSpecName: "kube-api-access-cdhp2") pod "269c34bc-d8f7-4c68-bbff-ff5ff812de92" (UID: "269c34bc-d8f7-4c68-bbff-ff5ff812de92"). InnerVolumeSpecName "kube-api-access-cdhp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.880622 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.881540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d5e-account-create-update-th69h" event={"ID":"602f406f-cd56-4b2b-8709-8114f7e1d34a","Type":"ContainerDied","Data":"570a8bf3dad5c033db5e4d89288e125e00b064d868ff8fdb60befa446b2a2b74"} Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.881608 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570a8bf3dad5c033db5e4d89288e125e00b064d868ff8fdb60befa446b2a2b74" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.883460 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc30e4a5-148d-4296-b220-518e972b4f3b","Type":"ContainerStarted","Data":"dc3b69f092012efd73efc29a9eb55c130e546a59c28da490e6a607c97e1eea37"} Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.887092 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerStarted","Data":"a8ede196ca4b0a7c9b4725ff8479801f69cf6f38b40d179442d762f28b42ca62"} Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.975726 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdhp2\" (UniqueName: \"kubernetes.io/projected/269c34bc-d8f7-4c68-bbff-ff5ff812de92-kube-api-access-cdhp2\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:20 crc kubenswrapper[4770]: I0126 19:03:20.975770 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269c34bc-d8f7-4c68-bbff-ff5ff812de92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.248565 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.393568 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rblgx\" (UniqueName: \"kubernetes.io/projected/dad84030-54e1-4ca4-a1a5-1c2bac22679b-kube-api-access-rblgx\") pod \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.393761 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad84030-54e1-4ca4-a1a5-1c2bac22679b-operator-scripts\") pod \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\" (UID: \"dad84030-54e1-4ca4-a1a5-1c2bac22679b\") " Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.395175 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad84030-54e1-4ca4-a1a5-1c2bac22679b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dad84030-54e1-4ca4-a1a5-1c2bac22679b" (UID: "dad84030-54e1-4ca4-a1a5-1c2bac22679b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.411410 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad84030-54e1-4ca4-a1a5-1c2bac22679b-kube-api-access-rblgx" (OuterVolumeSpecName: "kube-api-access-rblgx") pod "dad84030-54e1-4ca4-a1a5-1c2bac22679b" (UID: "dad84030-54e1-4ca4-a1a5-1c2bac22679b"). InnerVolumeSpecName "kube-api-access-rblgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.497953 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rblgx\" (UniqueName: \"kubernetes.io/projected/dad84030-54e1-4ca4-a1a5-1c2bac22679b-kube-api-access-rblgx\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.498270 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad84030-54e1-4ca4-a1a5-1c2bac22679b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.899596 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc30e4a5-148d-4296-b220-518e972b4f3b","Type":"ContainerStarted","Data":"9dbe04980b71a4a4e4763c16d30718cb4e382d5c5790b00ea6f08f4f4ffe578d"} Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.899633 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc30e4a5-148d-4296-b220-518e972b4f3b","Type":"ContainerStarted","Data":"044015b35461d864e45f90bb8c5649d3d83bf61918541bc3af1e87855b118856"} Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.899769 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.902472 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerStarted","Data":"92a4d9188e1ea7e30a39b265043e3b0bf9ce25bcb50df95e7c6d113cfdfb99a1"} Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.902633 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-central-agent" 
containerID="cri-o://fd52118e337409a04dab71b59f2917ed320ae3635c0c8854ecf9ab0d38b17dfc" gracePeriod=30 Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.902661 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="sg-core" containerID="cri-o://a8ede196ca4b0a7c9b4725ff8479801f69cf6f38b40d179442d762f28b42ca62" gracePeriod=30 Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.902723 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.902739 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="proxy-httpd" containerID="cri-o://92a4d9188e1ea7e30a39b265043e3b0bf9ce25bcb50df95e7c6d113cfdfb99a1" gracePeriod=30 Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.902759 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-notification-agent" containerID="cri-o://0206bfd60c4ea84a5fb681b6836080b873679bf774e038b6df88e304b4cf1c0e" gracePeriod=30 Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.908120 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" event={"ID":"dad84030-54e1-4ca4-a1a5-1c2bac22679b","Type":"ContainerDied","Data":"02833b13c5b68cc38920db7af35343a1d348d9b49b73f24497d0611f0de5bcdc"} Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.908164 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02833b13c5b68cc38920db7af35343a1d348d9b49b73f24497d0611f0de5bcdc" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.908234 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e5ff-account-create-update-ptwhv" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.924628 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.924608525 podStartE2EDuration="3.924608525s" podCreationTimestamp="2026-01-26 19:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:21.920937154 +0000 UTC m=+1286.485843906" watchObservedRunningTime="2026-01-26 19:03:21.924608525 +0000 UTC m=+1286.489515247" Jan 26 19:03:21 crc kubenswrapper[4770]: I0126 19:03:21.950741 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8553861249999999 podStartE2EDuration="4.950717532s" podCreationTimestamp="2026-01-26 19:03:17 +0000 UTC" firstStartedPulling="2026-01-26 19:03:18.246189247 +0000 UTC m=+1282.811095979" lastFinishedPulling="2026-01-26 19:03:21.341520654 +0000 UTC m=+1285.906427386" observedRunningTime="2026-01-26 19:03:21.939997337 +0000 UTC m=+1286.504904079" watchObservedRunningTime="2026-01-26 19:03:21.950717532 +0000 UTC m=+1286.515624274" Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.481957 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8688c56555-rsnrn" Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.822722 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.851340 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.918868 4770 generic.go:334] "Generic (PLEG): container finished" podID="04b53aaa-102c-4132-8965-51019fa30104" 
containerID="92a4d9188e1ea7e30a39b265043e3b0bf9ce25bcb50df95e7c6d113cfdfb99a1" exitCode=0 Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.918897 4770 generic.go:334] "Generic (PLEG): container finished" podID="04b53aaa-102c-4132-8965-51019fa30104" containerID="a8ede196ca4b0a7c9b4725ff8479801f69cf6f38b40d179442d762f28b42ca62" exitCode=2 Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.918909 4770 generic.go:334] "Generic (PLEG): container finished" podID="04b53aaa-102c-4132-8965-51019fa30104" containerID="0206bfd60c4ea84a5fb681b6836080b873679bf774e038b6df88e304b4cf1c0e" exitCode=0 Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.919820 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerDied","Data":"92a4d9188e1ea7e30a39b265043e3b0bf9ce25bcb50df95e7c6d113cfdfb99a1"} Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.919885 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.919902 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerDied","Data":"a8ede196ca4b0a7c9b4725ff8479801f69cf6f38b40d179442d762f28b42ca62"} Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.919913 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerDied","Data":"0206bfd60c4ea84a5fb681b6836080b873679bf774e038b6df88e304b4cf1c0e"} Jan 26 19:03:22 crc kubenswrapper[4770]: I0126 19:03:22.956739 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.830024 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-2gtrl"] Jan 26 19:03:24 crc kubenswrapper[4770]: E0126 19:03:24.830692 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda29038-706d-42e7-9b63-f6c2a3313ff3" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.830732 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda29038-706d-42e7-9b63-f6c2a3313ff3" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: E0126 19:03:24.830749 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269c34bc-d8f7-4c68-bbff-ff5ff812de92" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.830755 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="269c34bc-d8f7-4c68-bbff-ff5ff812de92" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: E0126 19:03:24.830765 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53685f07-5a65-44be-b2e9-1eb713d3ab04" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.830772 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="53685f07-5a65-44be-b2e9-1eb713d3ab04" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: E0126 19:03:24.830788 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298782e0-4453-412a-b9c9-08a16d4317d6" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.830796 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="298782e0-4453-412a-b9c9-08a16d4317d6" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: E0126 19:03:24.830810 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad84030-54e1-4ca4-a1a5-1c2bac22679b" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.830818 
4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad84030-54e1-4ca4-a1a5-1c2bac22679b" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: E0126 19:03:24.830844 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602f406f-cd56-4b2b-8709-8114f7e1d34a" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.830851 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="602f406f-cd56-4b2b-8709-8114f7e1d34a" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.831042 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="53685f07-5a65-44be-b2e9-1eb713d3ab04" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.831057 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="269c34bc-d8f7-4c68-bbff-ff5ff812de92" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.831066 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="298782e0-4453-412a-b9c9-08a16d4317d6" containerName="mariadb-database-create" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.831079 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda29038-706d-42e7-9b63-f6c2a3313ff3" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.831092 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad84030-54e1-4ca4-a1a5-1c2bac22679b" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.831101 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="602f406f-cd56-4b2b-8709-8114f7e1d34a" containerName="mariadb-account-create-update" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.831785 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.834585 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-krbbg" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.834818 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.840211 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.845535 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2gtrl"] Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.874372 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-config-data\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.876152 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.876304 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbtbp\" (UniqueName: \"kubernetes.io/projected/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-kube-api-access-tbtbp\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " 
pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.876413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-scripts\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.978475 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.978532 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbtbp\" (UniqueName: \"kubernetes.io/projected/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-kube-api-access-tbtbp\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.978559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-scripts\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.978665 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-config-data\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " 
pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.984332 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.984680 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-scripts\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.987631 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-config-data\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:24 crc kubenswrapper[4770]: I0126 19:03:24.994212 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbtbp\" (UniqueName: \"kubernetes.io/projected/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-kube-api-access-tbtbp\") pod \"nova-cell0-conductor-db-sync-2gtrl\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:25 crc kubenswrapper[4770]: I0126 19:03:25.151432 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:25 crc kubenswrapper[4770]: I0126 19:03:25.608689 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2gtrl"] Jan 26 19:03:25 crc kubenswrapper[4770]: W0126 19:03:25.614514 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c85e81_cbf1_4b3e_9012_f8f10e74021e.slice/crio-196544462bd313959a8fe286df4a4e2ec7163d92de206f9023663d4a4c30b5dc WatchSource:0}: Error finding container 196544462bd313959a8fe286df4a4e2ec7163d92de206f9023663d4a4c30b5dc: Status 404 returned error can't find the container with id 196544462bd313959a8fe286df4a4e2ec7163d92de206f9023663d4a4c30b5dc Jan 26 19:03:25 crc kubenswrapper[4770]: I0126 19:03:25.948439 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" event={"ID":"c6c85e81-cbf1-4b3e-9012-f8f10e74021e","Type":"ContainerStarted","Data":"196544462bd313959a8fe286df4a4e2ec7163d92de206f9023663d4a4c30b5dc"} Jan 26 19:03:26 crc kubenswrapper[4770]: I0126 19:03:26.961498 4770 generic.go:334] "Generic (PLEG): container finished" podID="04b53aaa-102c-4132-8965-51019fa30104" containerID="fd52118e337409a04dab71b59f2917ed320ae3635c0c8854ecf9ab0d38b17dfc" exitCode=0 Jan 26 19:03:26 crc kubenswrapper[4770]: I0126 19:03:26.962105 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerDied","Data":"fd52118e337409a04dab71b59f2917ed320ae3635c0c8854ecf9ab0d38b17dfc"} Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.095163 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.122708 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-combined-ca-bundle\") pod \"04b53aaa-102c-4132-8965-51019fa30104\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.122974 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrpjb\" (UniqueName: \"kubernetes.io/projected/04b53aaa-102c-4132-8965-51019fa30104-kube-api-access-rrpjb\") pod \"04b53aaa-102c-4132-8965-51019fa30104\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.143883 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b53aaa-102c-4132-8965-51019fa30104-kube-api-access-rrpjb" (OuterVolumeSpecName: "kube-api-access-rrpjb") pod "04b53aaa-102c-4132-8965-51019fa30104" (UID: "04b53aaa-102c-4132-8965-51019fa30104"). InnerVolumeSpecName "kube-api-access-rrpjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.220846 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04b53aaa-102c-4132-8965-51019fa30104" (UID: "04b53aaa-102c-4132-8965-51019fa30104"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.225302 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-config-data\") pod \"04b53aaa-102c-4132-8965-51019fa30104\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.225385 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-log-httpd\") pod \"04b53aaa-102c-4132-8965-51019fa30104\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.225409 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-scripts\") pod \"04b53aaa-102c-4132-8965-51019fa30104\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.225581 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-run-httpd\") pod \"04b53aaa-102c-4132-8965-51019fa30104\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.225671 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-sg-core-conf-yaml\") pod \"04b53aaa-102c-4132-8965-51019fa30104\" (UID: \"04b53aaa-102c-4132-8965-51019fa30104\") " Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.226240 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-log-httpd" 
(OuterVolumeSpecName: "log-httpd") pod "04b53aaa-102c-4132-8965-51019fa30104" (UID: "04b53aaa-102c-4132-8965-51019fa30104"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.226567 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04b53aaa-102c-4132-8965-51019fa30104" (UID: "04b53aaa-102c-4132-8965-51019fa30104"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.226790 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.226821 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrpjb\" (UniqueName: \"kubernetes.io/projected/04b53aaa-102c-4132-8965-51019fa30104-kube-api-access-rrpjb\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.226835 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.226846 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04b53aaa-102c-4132-8965-51019fa30104-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.229045 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-scripts" (OuterVolumeSpecName: "scripts") pod "04b53aaa-102c-4132-8965-51019fa30104" 
(UID: "04b53aaa-102c-4132-8965-51019fa30104"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.250931 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04b53aaa-102c-4132-8965-51019fa30104" (UID: "04b53aaa-102c-4132-8965-51019fa30104"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.324368 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-config-data" (OuterVolumeSpecName: "config-data") pod "04b53aaa-102c-4132-8965-51019fa30104" (UID: "04b53aaa-102c-4132-8965-51019fa30104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.328025 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.328060 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.328073 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b53aaa-102c-4132-8965-51019fa30104-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.980259 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"04b53aaa-102c-4132-8965-51019fa30104","Type":"ContainerDied","Data":"6d9af1e761a9cf84dc72abe2dca60edf611496f86d7682d034aea52d0c934d99"} Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.980737 4770 scope.go:117] "RemoveContainer" containerID="92a4d9188e1ea7e30a39b265043e3b0bf9ce25bcb50df95e7c6d113cfdfb99a1" Jan 26 19:03:27 crc kubenswrapper[4770]: I0126 19:03:27.980446 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.020367 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.032456 4770 scope.go:117] "RemoveContainer" containerID="a8ede196ca4b0a7c9b4725ff8479801f69cf6f38b40d179442d762f28b42ca62" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.042237 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.056677 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:28 crc kubenswrapper[4770]: E0126 19:03:28.057054 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-central-agent" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057070 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-central-agent" Jan 26 19:03:28 crc kubenswrapper[4770]: E0126 19:03:28.057088 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="sg-core" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057094 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="sg-core" Jan 26 19:03:28 crc kubenswrapper[4770]: E0126 19:03:28.057109 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="proxy-httpd" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057115 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="proxy-httpd" Jan 26 19:03:28 crc kubenswrapper[4770]: E0126 19:03:28.057141 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-notification-agent" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057147 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-notification-agent" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057322 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-central-agent" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057340 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="ceilometer-notification-agent" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057356 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="sg-core" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057376 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b53aaa-102c-4132-8965-51019fa30104" containerName="proxy-httpd" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.057843 4770 scope.go:117] "RemoveContainer" containerID="0206bfd60c4ea84a5fb681b6836080b873679bf774e038b6df88e304b4cf1c0e" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.059184 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.064357 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.064679 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.069033 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.090384 4770 scope.go:117] "RemoveContainer" containerID="fd52118e337409a04dab71b59f2917ed320ae3635c0c8854ecf9ab0d38b17dfc" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.252174 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-run-httpd\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.252228 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-scripts\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.252290 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-config-data\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.252319 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.252357 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.252495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-log-httpd\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.252684 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk27j\" (UniqueName: \"kubernetes.io/projected/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-kube-api-access-rk27j\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.354200 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-run-httpd\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.354248 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-scripts\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " 
pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.354288 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-config-data\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.354310 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.354343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.354763 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-run-httpd\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.355334 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-log-httpd\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.355408 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk27j\" (UniqueName: 
\"kubernetes.io/projected/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-kube-api-access-rk27j\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.356113 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-log-httpd\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.360608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-scripts\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.360659 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.361570 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-config-data\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.371063 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.377091 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk27j\" (UniqueName: \"kubernetes.io/projected/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-kube-api-access-rk27j\") pod \"ceilometer-0\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " pod="openstack/ceilometer-0" Jan 26 19:03:28 crc kubenswrapper[4770]: I0126 19:03:28.676175 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:29 crc kubenswrapper[4770]: I0126 19:03:29.718406 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:29 crc kubenswrapper[4770]: I0126 19:03:29.786489 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b53aaa-102c-4132-8965-51019fa30104" path="/var/lib/kubelet/pods/04b53aaa-102c-4132-8965-51019fa30104/volumes" Jan 26 19:03:30 crc kubenswrapper[4770]: I0126 19:03:30.906548 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:03:30 crc kubenswrapper[4770]: I0126 19:03:30.907081 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-log" containerID="cri-o://269364b51e44a84af09018313383df30b0914867186361e7273ef6697cc6aad7" gracePeriod=30 Jan 26 19:03:30 crc kubenswrapper[4770]: I0126 19:03:30.907560 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-httpd" containerID="cri-o://4705719a97fcaaa1fbf1180abe94cffd0cc6f7b492dcbeacdf5f1a4d4a4363d2" gracePeriod=30 Jan 26 19:03:31 crc kubenswrapper[4770]: I0126 19:03:31.038612 4770 generic.go:334] "Generic (PLEG): container finished" podID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerID="269364b51e44a84af09018313383df30b0914867186361e7273ef6697cc6aad7" 
exitCode=143 Jan 26 19:03:31 crc kubenswrapper[4770]: I0126 19:03:31.038660 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece838e9-4831-4ff8-abac-6e7a228c76a0","Type":"ContainerDied","Data":"269364b51e44a84af09018313383df30b0914867186361e7273ef6697cc6aad7"} Jan 26 19:03:31 crc kubenswrapper[4770]: I0126 19:03:31.440257 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 19:03:33 crc kubenswrapper[4770]: I0126 19:03:33.057417 4770 generic.go:334] "Generic (PLEG): container finished" podID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerID="4705719a97fcaaa1fbf1180abe94cffd0cc6f7b492dcbeacdf5f1a4d4a4363d2" exitCode=0 Jan 26 19:03:33 crc kubenswrapper[4770]: I0126 19:03:33.057484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece838e9-4831-4ff8-abac-6e7a228c76a0","Type":"ContainerDied","Data":"4705719a97fcaaa1fbf1180abe94cffd0cc6f7b492dcbeacdf5f1a4d4a4363d2"} Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.419822 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556402 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-combined-ca-bundle\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556470 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-scripts\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556555 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-logs\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556606 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s45lg\" (UniqueName: \"kubernetes.io/projected/ece838e9-4831-4ff8-abac-6e7a228c76a0-kube-api-access-s45lg\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556623 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-internal-tls-certs\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556668 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556690 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-config-data\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.556725 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-httpd-run\") pod \"ece838e9-4831-4ff8-abac-6e7a228c76a0\" (UID: \"ece838e9-4831-4ff8-abac-6e7a228c76a0\") " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.557081 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-logs" (OuterVolumeSpecName: "logs") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.557310 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.557598 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.557621 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ece838e9-4831-4ff8-abac-6e7a228c76a0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.565216 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece838e9-4831-4ff8-abac-6e7a228c76a0-kube-api-access-s45lg" (OuterVolumeSpecName: "kube-api-access-s45lg") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). InnerVolumeSpecName "kube-api-access-s45lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.565243 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-scripts" (OuterVolumeSpecName: "scripts") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.565600 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.579641 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:35 crc kubenswrapper[4770]: W0126 19:03:35.581793 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17ebd03a_d44b_4637_bd94_8ae3f9259a8b.slice/crio-a0c5aed186fcc5782f80ba1aae9d6038cdd00a893f5f67b1b9bd7fb75a5abf6f WatchSource:0}: Error finding container a0c5aed186fcc5782f80ba1aae9d6038cdd00a893f5f67b1b9bd7fb75a5abf6f: Status 404 returned error can't find the container with id a0c5aed186fcc5782f80ba1aae9d6038cdd00a893f5f67b1b9bd7fb75a5abf6f Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.608665 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.629086 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-config-data" (OuterVolumeSpecName: "config-data") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.633820 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ece838e9-4831-4ff8-abac-6e7a228c76a0" (UID: "ece838e9-4831-4ff8-abac-6e7a228c76a0"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.662410 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s45lg\" (UniqueName: \"kubernetes.io/projected/ece838e9-4831-4ff8-abac-6e7a228c76a0-kube-api-access-s45lg\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.662442 4770 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.662470 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.662488 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.662499 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.662508 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ece838e9-4831-4ff8-abac-6e7a228c76a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.704538 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 19:03:35 crc kubenswrapper[4770]: I0126 19:03:35.764318 4770 
reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.087457 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ece838e9-4831-4ff8-abac-6e7a228c76a0","Type":"ContainerDied","Data":"11fd1e788fb6dca44e501fbf20aee6153ca379390fff6e788d040357c7fcd1a2"} Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.087525 4770 scope.go:117] "RemoveContainer" containerID="4705719a97fcaaa1fbf1180abe94cffd0cc6f7b492dcbeacdf5f1a4d4a4363d2" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.087719 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.093053 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerStarted","Data":"c7b3f2ac1be125b3f9ae7e06879c52e644326b64c19f0ce1575ba698014177a3"} Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.093094 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerStarted","Data":"f3b401a94234b44f8e458c39c907e9ee963d5584beb159b9d6d1dc90acae84e0"} Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.093104 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerStarted","Data":"a0c5aed186fcc5782f80ba1aae9d6038cdd00a893f5f67b1b9bd7fb75a5abf6f"} Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.094802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" 
event={"ID":"c6c85e81-cbf1-4b3e-9012-f8f10e74021e","Type":"ContainerStarted","Data":"8c1d41495873e40bf971a45bdd81011b56d5b0bd8239fcac41c6ebfd740533e3"} Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.116973 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.129113 4770 scope.go:117] "RemoveContainer" containerID="269364b51e44a84af09018313383df30b0914867186361e7273ef6697cc6aad7" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.135181 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.155629 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" podStartSLOduration=2.700325954 podStartE2EDuration="12.155611472s" podCreationTimestamp="2026-01-26 19:03:24 +0000 UTC" firstStartedPulling="2026-01-26 19:03:25.61640237 +0000 UTC m=+1290.181309102" lastFinishedPulling="2026-01-26 19:03:35.071687848 +0000 UTC m=+1299.636594620" observedRunningTime="2026-01-26 19:03:36.140385774 +0000 UTC m=+1300.705292506" watchObservedRunningTime="2026-01-26 19:03:36.155611472 +0000 UTC m=+1300.720518204" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.155974 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:03:36 crc kubenswrapper[4770]: E0126 19:03:36.156386 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-httpd" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.156398 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-httpd" Jan 26 19:03:36 crc kubenswrapper[4770]: E0126 19:03:36.156410 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-log" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.156416 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-log" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.156585 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-log" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.156600 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" containerName="glance-httpd" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.157571 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.161356 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.166813 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.205384 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.275988 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93320a1f-7ced-4765-95a5-918a8fa2de1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.276101 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.276256 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93320a1f-7ced-4765-95a5-918a8fa2de1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.276281 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.276356 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.276420 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjmh\" (UniqueName: \"kubernetes.io/projected/93320a1f-7ced-4765-95a5-918a8fa2de1c-kube-api-access-8pjmh\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.276524 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.276572 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.377841 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.377885 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.377955 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93320a1f-7ced-4765-95a5-918a8fa2de1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.377976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.378044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93320a1f-7ced-4765-95a5-918a8fa2de1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.378062 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.378108 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.378151 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjmh\" (UniqueName: \"kubernetes.io/projected/93320a1f-7ced-4765-95a5-918a8fa2de1c-kube-api-access-8pjmh\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.378886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/93320a1f-7ced-4765-95a5-918a8fa2de1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.378965 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.381249 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93320a1f-7ced-4765-95a5-918a8fa2de1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.384579 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.384953 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.384993 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.398000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93320a1f-7ced-4765-95a5-918a8fa2de1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.405436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pjmh\" (UniqueName: \"kubernetes.io/projected/93320a1f-7ced-4765-95a5-918a8fa2de1c-kube-api-access-8pjmh\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.440187 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"93320a1f-7ced-4765-95a5-918a8fa2de1c\") " pod="openstack/glance-default-internal-api-0" Jan 26 19:03:36 crc kubenswrapper[4770]: I0126 19:03:36.487540 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:37 crc kubenswrapper[4770]: I0126 19:03:37.037807 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 19:03:37 crc kubenswrapper[4770]: W0126 19:03:37.047878 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93320a1f_7ced_4765_95a5_918a8fa2de1c.slice/crio-474252531a46ab02eb8e98bbf0c460a69c5d9c9b8db170cb89bc5d7d519cb1a9 WatchSource:0}: Error finding container 474252531a46ab02eb8e98bbf0c460a69c5d9c9b8db170cb89bc5d7d519cb1a9: Status 404 returned error can't find the container with id 474252531a46ab02eb8e98bbf0c460a69c5d9c9b8db170cb89bc5d7d519cb1a9 Jan 26 19:03:37 crc kubenswrapper[4770]: I0126 19:03:37.109181 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerStarted","Data":"2bbe05da1c33c3d935ce8624f8f68e65bfc2fbcfd622dc7b14c0c5c7cb6ef7a4"} Jan 26 19:03:37 crc kubenswrapper[4770]: I0126 19:03:37.126667 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93320a1f-7ced-4765-95a5-918a8fa2de1c","Type":"ContainerStarted","Data":"474252531a46ab02eb8e98bbf0c460a69c5d9c9b8db170cb89bc5d7d519cb1a9"} Jan 26 19:03:37 crc kubenswrapper[4770]: I0126 19:03:37.128550 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:03:37 crc kubenswrapper[4770]: I0126 19:03:37.792610 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece838e9-4831-4ff8-abac-6e7a228c76a0" path="/var/lib/kubelet/pods/ece838e9-4831-4ff8-abac-6e7a228c76a0/volumes" Jan 26 19:03:38 crc kubenswrapper[4770]: I0126 19:03:38.148511 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"93320a1f-7ced-4765-95a5-918a8fa2de1c","Type":"ContainerStarted","Data":"871c3b4148853c99652d42d7eb740dc44f92289709b98efabc35bca09e7847b7"} Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.167954 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerStarted","Data":"fa31a01e0a8bf121c66be26b9dee474ff491b32f8ce31c4386e4634089250e22"} Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.168429 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-central-agent" containerID="cri-o://f3b401a94234b44f8e458c39c907e9ee963d5584beb159b9d6d1dc90acae84e0" gracePeriod=30 Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.168711 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.168983 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="proxy-httpd" containerID="cri-o://fa31a01e0a8bf121c66be26b9dee474ff491b32f8ce31c4386e4634089250e22" gracePeriod=30 Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.169028 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="sg-core" containerID="cri-o://2bbe05da1c33c3d935ce8624f8f68e65bfc2fbcfd622dc7b14c0c5c7cb6ef7a4" gracePeriod=30 Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.169069 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-notification-agent" containerID="cri-o://c7b3f2ac1be125b3f9ae7e06879c52e644326b64c19f0ce1575ba698014177a3" 
gracePeriod=30 Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.172458 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93320a1f-7ced-4765-95a5-918a8fa2de1c","Type":"ContainerStarted","Data":"ee4dee7925a21f3915c9ac44a4f8f35ae5caccd7651f1f6ea727c405af0e67f6"} Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.206455 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.379792798 podStartE2EDuration="11.206433806s" podCreationTimestamp="2026-01-26 19:03:28 +0000 UTC" firstStartedPulling="2026-01-26 19:03:35.58451297 +0000 UTC m=+1300.149419702" lastFinishedPulling="2026-01-26 19:03:38.411153978 +0000 UTC m=+1302.976060710" observedRunningTime="2026-01-26 19:03:39.19709533 +0000 UTC m=+1303.762002062" watchObservedRunningTime="2026-01-26 19:03:39.206433806 +0000 UTC m=+1303.771340538" Jan 26 19:03:39 crc kubenswrapper[4770]: I0126 19:03:39.222810 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.222795616 podStartE2EDuration="3.222795616s" podCreationTimestamp="2026-01-26 19:03:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:39.221982633 +0000 UTC m=+1303.786889365" watchObservedRunningTime="2026-01-26 19:03:39.222795616 +0000 UTC m=+1303.787702348" Jan 26 19:03:40 crc kubenswrapper[4770]: I0126 19:03:40.191913 4770 generic.go:334] "Generic (PLEG): container finished" podID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerID="fa31a01e0a8bf121c66be26b9dee474ff491b32f8ce31c4386e4634089250e22" exitCode=0 Jan 26 19:03:40 crc kubenswrapper[4770]: I0126 19:03:40.192282 4770 generic.go:334] "Generic (PLEG): container finished" podID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" 
containerID="2bbe05da1c33c3d935ce8624f8f68e65bfc2fbcfd622dc7b14c0c5c7cb6ef7a4" exitCode=2 Jan 26 19:03:40 crc kubenswrapper[4770]: I0126 19:03:40.192300 4770 generic.go:334] "Generic (PLEG): container finished" podID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerID="c7b3f2ac1be125b3f9ae7e06879c52e644326b64c19f0ce1575ba698014177a3" exitCode=0 Jan 26 19:03:40 crc kubenswrapper[4770]: I0126 19:03:40.193657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerDied","Data":"fa31a01e0a8bf121c66be26b9dee474ff491b32f8ce31c4386e4634089250e22"} Jan 26 19:03:40 crc kubenswrapper[4770]: I0126 19:03:40.193759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerDied","Data":"2bbe05da1c33c3d935ce8624f8f68e65bfc2fbcfd622dc7b14c0c5c7cb6ef7a4"} Jan 26 19:03:40 crc kubenswrapper[4770]: I0126 19:03:40.193791 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerDied","Data":"c7b3f2ac1be125b3f9ae7e06879c52e644326b64c19f0ce1575ba698014177a3"} Jan 26 19:03:41 crc kubenswrapper[4770]: I0126 19:03:41.018659 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:03:41 crc kubenswrapper[4770]: I0126 19:03:41.019801 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerName="glance-log" containerID="cri-o://50abb1bee56ee2afd5d4c9d2af80fbe4a1d67cc0ac1abd6b23ed7fc939b9880f" gracePeriod=30 Jan 26 19:03:41 crc kubenswrapper[4770]: I0126 19:03:41.019967 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" 
containerName="glance-httpd" containerID="cri-o://6bd1d45a5d8ccbafc12d51c71edd786325e8f0e5594ea365165c6be157504471" gracePeriod=30 Jan 26 19:03:41 crc kubenswrapper[4770]: I0126 19:03:41.204476 4770 generic.go:334] "Generic (PLEG): container finished" podID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerID="50abb1bee56ee2afd5d4c9d2af80fbe4a1d67cc0ac1abd6b23ed7fc939b9880f" exitCode=143 Jan 26 19:03:41 crc kubenswrapper[4770]: I0126 19:03:41.204569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5","Type":"ContainerDied","Data":"50abb1bee56ee2afd5d4c9d2af80fbe4a1d67cc0ac1abd6b23ed7fc939b9880f"} Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.236592 4770 generic.go:334] "Generic (PLEG): container finished" podID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerID="6bd1d45a5d8ccbafc12d51c71edd786325e8f0e5594ea365165c6be157504471" exitCode=0 Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.236643 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5","Type":"ContainerDied","Data":"6bd1d45a5d8ccbafc12d51c71edd786325e8f0e5594ea365165c6be157504471"} Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.403789 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.509656 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcm72\" (UniqueName: \"kubernetes.io/projected/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-kube-api-access-mcm72\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.509811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-httpd-run\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.509834 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-scripts\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.509945 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-combined-ca-bundle\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.509964 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-config-data\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.509985 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.510033 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-public-tls-certs\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.510053 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-logs\") pod \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\" (UID: \"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5\") " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.510337 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.510449 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.510538 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-logs" (OuterVolumeSpecName: "logs") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.515817 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-kube-api-access-mcm72" (OuterVolumeSpecName: "kube-api-access-mcm72") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "kube-api-access-mcm72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.521493 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-scripts" (OuterVolumeSpecName: "scripts") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.553197 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.553981 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.572930 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.594847 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-config-data" (OuterVolumeSpecName: "config-data") pod "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" (UID: "1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.612526 4770 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.612810 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.612906 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcm72\" (UniqueName: \"kubernetes.io/projected/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-kube-api-access-mcm72\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.612992 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:42 crc 
kubenswrapper[4770]: I0126 19:03:42.613070 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.613150 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.613302 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.641568 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 26 19:03:42 crc kubenswrapper[4770]: I0126 19:03:42.714562 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.246563 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5","Type":"ContainerDied","Data":"d9d4d4c1c094a73473f510a94040db6c954d66434dd6c0b068d6907d5fffe243"} Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.246918 4770 scope.go:117] "RemoveContainer" containerID="6bd1d45a5d8ccbafc12d51c71edd786325e8f0e5594ea365165c6be157504471" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.246849 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.283677 4770 scope.go:117] "RemoveContainer" containerID="50abb1bee56ee2afd5d4c9d2af80fbe4a1d67cc0ac1abd6b23ed7fc939b9880f" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.291951 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.305046 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.318784 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:03:43 crc kubenswrapper[4770]: E0126 19:03:43.319946 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerName="glance-httpd" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.319969 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerName="glance-httpd" Jan 26 19:03:43 crc kubenswrapper[4770]: E0126 19:03:43.320003 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerName="glance-log" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.320011 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerName="glance-log" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.320193 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerName="glance-log" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.320211 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" containerName="glance-httpd" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.321197 4770 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.324045 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.327085 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.337355 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428283 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428394 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-logs\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428480 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tgw7j\" (UniqueName: \"kubernetes.io/projected/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-kube-api-access-tgw7j\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428514 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428566 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.428623 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530680 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530803 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530854 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-logs\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530884 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgw7j\" (UniqueName: \"kubernetes.io/projected/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-kube-api-access-tgw7j\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530912 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530955 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530968 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.530996 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.531265 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.536321 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-logs\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.537064 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.537553 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.538113 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.538076 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.540406 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.557014 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgw7j\" (UniqueName: \"kubernetes.io/projected/c98b34c0-4fc9-4b79-b664-bbc8ddb787a1-kube-api-access-tgw7j\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " 
pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.568525 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1\") " pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.655028 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 19:03:43 crc kubenswrapper[4770]: I0126 19:03:43.786540 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5" path="/var/lib/kubelet/pods/1fa33abe-5193-463c-b9c8-b3e4a6c6f0d5/volumes" Jan 26 19:03:44 crc kubenswrapper[4770]: I0126 19:03:44.195930 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 19:03:44 crc kubenswrapper[4770]: I0126 19:03:44.259832 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1","Type":"ContainerStarted","Data":"cd3d70701c24370708059f327c869865259c0f15281d1aec6bbba39efc410da4"} Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.280226 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1","Type":"ContainerStarted","Data":"54cb2eb7b92d7b1622103b2ae056aa224512fc6e32f4df5ff4849b11c75f5cf7"} Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.281046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c98b34c0-4fc9-4b79-b664-bbc8ddb787a1","Type":"ContainerStarted","Data":"9e3025bd1663579678425fee30fbdbeb291202f2d42fd28254bb73f8ab0f9c72"} Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 
19:03:46.283134 4770 generic.go:334] "Generic (PLEG): container finished" podID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerID="f3b401a94234b44f8e458c39c907e9ee963d5584beb159b9d6d1dc90acae84e0" exitCode=0 Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.283165 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerDied","Data":"f3b401a94234b44f8e458c39c907e9ee963d5584beb159b9d6d1dc90acae84e0"} Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.316804 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.316787555 podStartE2EDuration="3.316787555s" podCreationTimestamp="2026-01-26 19:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:46.312082205 +0000 UTC m=+1310.876988937" watchObservedRunningTime="2026-01-26 19:03:46.316787555 +0000 UTC m=+1310.881694287" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.398304 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.489675 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.489731 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.494307 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-config-data\") pod \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.494664 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-log-httpd\") pod \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.494741 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-run-httpd\") pod \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.494826 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-sg-core-conf-yaml\") pod \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.494912 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk27j\" (UniqueName: 
\"kubernetes.io/projected/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-kube-api-access-rk27j\") pod \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.494937 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-scripts\") pod \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.494992 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-combined-ca-bundle\") pod \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\" (UID: \"17ebd03a-d44b-4637-bd94-8ae3f9259a8b\") " Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.498968 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "17ebd03a-d44b-4637-bd94-8ae3f9259a8b" (UID: "17ebd03a-d44b-4637-bd94-8ae3f9259a8b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.502770 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "17ebd03a-d44b-4637-bd94-8ae3f9259a8b" (UID: "17ebd03a-d44b-4637-bd94-8ae3f9259a8b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.505815 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-scripts" (OuterVolumeSpecName: "scripts") pod "17ebd03a-d44b-4637-bd94-8ae3f9259a8b" (UID: "17ebd03a-d44b-4637-bd94-8ae3f9259a8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.510241 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-kube-api-access-rk27j" (OuterVolumeSpecName: "kube-api-access-rk27j") pod "17ebd03a-d44b-4637-bd94-8ae3f9259a8b" (UID: "17ebd03a-d44b-4637-bd94-8ae3f9259a8b"). InnerVolumeSpecName "kube-api-access-rk27j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.526052 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "17ebd03a-d44b-4637-bd94-8ae3f9259a8b" (UID: "17ebd03a-d44b-4637-bd94-8ae3f9259a8b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.526835 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.549367 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.585040 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17ebd03a-d44b-4637-bd94-8ae3f9259a8b" (UID: "17ebd03a-d44b-4637-bd94-8ae3f9259a8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.596678 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.596731 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk27j\" (UniqueName: \"kubernetes.io/projected/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-kube-api-access-rk27j\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.596740 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.596749 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 
19:03:46.596780 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.596789 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.608690 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-config-data" (OuterVolumeSpecName: "config-data") pod "17ebd03a-d44b-4637-bd94-8ae3f9259a8b" (UID: "17ebd03a-d44b-4637-bd94-8ae3f9259a8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:46 crc kubenswrapper[4770]: I0126 19:03:46.699800 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17ebd03a-d44b-4637-bd94-8ae3f9259a8b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.294965 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17ebd03a-d44b-4637-bd94-8ae3f9259a8b","Type":"ContainerDied","Data":"a0c5aed186fcc5782f80ba1aae9d6038cdd00a893f5f67b1b9bd7fb75a5abf6f"} Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.295021 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.295049 4770 scope.go:117] "RemoveContainer" containerID="fa31a01e0a8bf121c66be26b9dee474ff491b32f8ce31c4386e4634089250e22" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.296726 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.296906 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.315047 4770 scope.go:117] "RemoveContainer" containerID="2bbe05da1c33c3d935ce8624f8f68e65bfc2fbcfd622dc7b14c0c5c7cb6ef7a4" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.332218 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.347553 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.351146 4770 scope.go:117] "RemoveContainer" containerID="c7b3f2ac1be125b3f9ae7e06879c52e644326b64c19f0ce1575ba698014177a3" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.363781 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:47 crc kubenswrapper[4770]: E0126 19:03:47.364117 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="sg-core" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.364128 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="sg-core" Jan 26 19:03:47 crc kubenswrapper[4770]: E0126 19:03:47.364141 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="proxy-httpd" Jan 26 19:03:47 crc 
kubenswrapper[4770]: I0126 19:03:47.364146 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="proxy-httpd" Jan 26 19:03:47 crc kubenswrapper[4770]: E0126 19:03:47.364160 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-notification-agent" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.364166 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-notification-agent" Jan 26 19:03:47 crc kubenswrapper[4770]: E0126 19:03:47.364179 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-central-agent" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.364185 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-central-agent" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.364347 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="proxy-httpd" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.364365 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="sg-core" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.364379 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-notification-agent" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.364391 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" containerName="ceilometer-central-agent" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.366160 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.378109 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.379010 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.395595 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.396799 4770 scope.go:117] "RemoveContainer" containerID="f3b401a94234b44f8e458c39c907e9ee963d5584beb159b9d6d1dc90acae84e0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.515303 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-config-data\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.515565 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-run-httpd\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.515634 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.515672 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dc4wx\" (UniqueName: \"kubernetes.io/projected/49a4d97b-c448-4f1a-b898-ebff4a739deb-kube-api-access-dc4wx\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.515731 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-scripts\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.515850 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-log-httpd\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.515918 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.617952 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.618011 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc4wx\" (UniqueName: \"kubernetes.io/projected/49a4d97b-c448-4f1a-b898-ebff4a739deb-kube-api-access-dc4wx\") pod \"ceilometer-0\" (UID: 
\"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.618063 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-scripts\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.618113 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-log-httpd\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.618190 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.618266 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-config-data\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.618322 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-run-httpd\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.618965 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-run-httpd\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.620175 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-log-httpd\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.627343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.627548 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-config-data\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.628552 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-scripts\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.634000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.643107 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dc4wx\" (UniqueName: \"kubernetes.io/projected/49a4d97b-c448-4f1a-b898-ebff4a739deb-kube-api-access-dc4wx\") pod \"ceilometer-0\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") " pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.691647 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:03:47 crc kubenswrapper[4770]: I0126 19:03:47.785293 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ebd03a-d44b-4637-bd94-8ae3f9259a8b" path="/var/lib/kubelet/pods/17ebd03a-d44b-4637-bd94-8ae3f9259a8b/volumes" Jan 26 19:03:48 crc kubenswrapper[4770]: I0126 19:03:48.196267 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:48 crc kubenswrapper[4770]: W0126 19:03:48.198125 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49a4d97b_c448_4f1a_b898_ebff4a739deb.slice/crio-07976d188bca7d9119aadf3a2aa85616c4aea3d2dd7f038f7f04915336d28467 WatchSource:0}: Error finding container 07976d188bca7d9119aadf3a2aa85616c4aea3d2dd7f038f7f04915336d28467: Status 404 returned error can't find the container with id 07976d188bca7d9119aadf3a2aa85616c4aea3d2dd7f038f7f04915336d28467 Jan 26 19:03:48 crc kubenswrapper[4770]: I0126 19:03:48.307611 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerStarted","Data":"07976d188bca7d9119aadf3a2aa85616c4aea3d2dd7f038f7f04915336d28467"} Jan 26 19:03:49 crc kubenswrapper[4770]: I0126 19:03:49.222801 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 19:03:49 crc kubenswrapper[4770]: I0126 19:03:49.224865 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 26 19:03:49 crc kubenswrapper[4770]: I0126 19:03:49.333235 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerStarted","Data":"05f019326102235f6b7b6c8240100c5e8aa5d87e0cb41c12216c8d152d5ef4a1"} Jan 26 19:03:49 crc kubenswrapper[4770]: I0126 19:03:49.333571 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerStarted","Data":"07072567434bd1cdd1878855cb0f01e59056dce5f28eaef82384cee25520a70b"} Jan 26 19:03:50 crc kubenswrapper[4770]: I0126 19:03:50.344124 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerStarted","Data":"f95b40d5ab0abb9c7613558157e934151d43e593ac211029126db6413b4100f6"} Jan 26 19:03:51 crc kubenswrapper[4770]: I0126 19:03:51.353502 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerStarted","Data":"65529df6bef3a771240166596e5d55ce138b19388afa0f98651c3e7f72572ebb"} Jan 26 19:03:51 crc kubenswrapper[4770]: I0126 19:03:51.353902 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:03:51 crc kubenswrapper[4770]: I0126 19:03:51.355319 4770 generic.go:334] "Generic (PLEG): container finished" podID="c6c85e81-cbf1-4b3e-9012-f8f10e74021e" containerID="8c1d41495873e40bf971a45bdd81011b56d5b0bd8239fcac41c6ebfd740533e3" exitCode=0 Jan 26 19:03:51 crc kubenswrapper[4770]: I0126 19:03:51.355365 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" event={"ID":"c6c85e81-cbf1-4b3e-9012-f8f10e74021e","Type":"ContainerDied","Data":"8c1d41495873e40bf971a45bdd81011b56d5b0bd8239fcac41c6ebfd740533e3"} Jan 26 19:03:51 crc 
kubenswrapper[4770]: I0126 19:03:51.386633 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.150508928 podStartE2EDuration="4.386612161s" podCreationTimestamp="2026-01-26 19:03:47 +0000 UTC" firstStartedPulling="2026-01-26 19:03:48.200062669 +0000 UTC m=+1312.764969401" lastFinishedPulling="2026-01-26 19:03:50.436165902 +0000 UTC m=+1315.001072634" observedRunningTime="2026-01-26 19:03:51.373088839 +0000 UTC m=+1315.937995571" watchObservedRunningTime="2026-01-26 19:03:51.386612161 +0000 UTC m=+1315.951518893" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.771332 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.852219 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-config-data\") pod \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.857117 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-combined-ca-bundle\") pod \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.857253 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-scripts\") pod \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.857390 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbtbp\" 
(UniqueName: \"kubernetes.io/projected/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-kube-api-access-tbtbp\") pod \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\" (UID: \"c6c85e81-cbf1-4b3e-9012-f8f10e74021e\") " Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.864374 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-scripts" (OuterVolumeSpecName: "scripts") pod "c6c85e81-cbf1-4b3e-9012-f8f10e74021e" (UID: "c6c85e81-cbf1-4b3e-9012-f8f10e74021e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.871633 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-kube-api-access-tbtbp" (OuterVolumeSpecName: "kube-api-access-tbtbp") pod "c6c85e81-cbf1-4b3e-9012-f8f10e74021e" (UID: "c6c85e81-cbf1-4b3e-9012-f8f10e74021e"). InnerVolumeSpecName "kube-api-access-tbtbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.885880 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-config-data" (OuterVolumeSpecName: "config-data") pod "c6c85e81-cbf1-4b3e-9012-f8f10e74021e" (UID: "c6c85e81-cbf1-4b3e-9012-f8f10e74021e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.905947 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c85e81-cbf1-4b3e-9012-f8f10e74021e" (UID: "c6c85e81-cbf1-4b3e-9012-f8f10e74021e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.962767 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.962800 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.962810 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbtbp\" (UniqueName: \"kubernetes.io/projected/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-kube-api-access-tbtbp\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:52 crc kubenswrapper[4770]: I0126 19:03:52.962821 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c85e81-cbf1-4b3e-9012-f8f10e74021e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.390207 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" event={"ID":"c6c85e81-cbf1-4b3e-9012-f8f10e74021e","Type":"ContainerDied","Data":"196544462bd313959a8fe286df4a4e2ec7163d92de206f9023663d4a4c30b5dc"} Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.390254 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="196544462bd313959a8fe286df4a4e2ec7163d92de206f9023663d4a4c30b5dc" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.390268 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2gtrl" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.513457 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:03:53 crc kubenswrapper[4770]: E0126 19:03:53.514186 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c85e81-cbf1-4b3e-9012-f8f10e74021e" containerName="nova-cell0-conductor-db-sync" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.514222 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c85e81-cbf1-4b3e-9012-f8f10e74021e" containerName="nova-cell0-conductor-db-sync" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.514533 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c85e81-cbf1-4b3e-9012-f8f10e74021e" containerName="nova-cell0-conductor-db-sync" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.515651 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.519641 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.519816 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-krbbg" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.568845 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.655692 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.655765 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.684731 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.693497 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rldm5\" (UniqueName: \"kubernetes.io/projected/2dbdedd9-6112-426d-9160-6f6785775066-kube-api-access-rldm5\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.693552 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.693663 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.708014 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.795126 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rldm5\" (UniqueName: \"kubernetes.io/projected/2dbdedd9-6112-426d-9160-6f6785775066-kube-api-access-rldm5\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.795177 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.795217 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.802313 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.802822 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.815567 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rldm5\" (UniqueName: \"kubernetes.io/projected/2dbdedd9-6112-426d-9160-6f6785775066-kube-api-access-rldm5\") pod \"nova-cell0-conductor-0\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:53 crc kubenswrapper[4770]: I0126 19:03:53.843480 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:54 crc kubenswrapper[4770]: I0126 19:03:54.292685 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:03:54 crc kubenswrapper[4770]: W0126 19:03:54.300989 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dbdedd9_6112_426d_9160_6f6785775066.slice/crio-4ba3985cee90224cc5be491ca8096b762e5331e90999f9f26b261e0c87d4db16 WatchSource:0}: Error finding container 4ba3985cee90224cc5be491ca8096b762e5331e90999f9f26b261e0c87d4db16: Status 404 returned error can't find the container with id 4ba3985cee90224cc5be491ca8096b762e5331e90999f9f26b261e0c87d4db16 Jan 26 19:03:54 crc kubenswrapper[4770]: I0126 19:03:54.404575 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2dbdedd9-6112-426d-9160-6f6785775066","Type":"ContainerStarted","Data":"4ba3985cee90224cc5be491ca8096b762e5331e90999f9f26b261e0c87d4db16"} Jan 26 19:03:54 crc kubenswrapper[4770]: I0126 19:03:54.404637 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:03:54 crc kubenswrapper[4770]: I0126 19:03:54.404838 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 19:03:55 crc kubenswrapper[4770]: I0126 19:03:55.413372 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2dbdedd9-6112-426d-9160-6f6785775066","Type":"ContainerStarted","Data":"29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6"} Jan 26 19:03:55 crc kubenswrapper[4770]: I0126 19:03:55.413956 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:55 crc kubenswrapper[4770]: I0126 19:03:55.432044 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.432030006 podStartE2EDuration="2.432030006s" podCreationTimestamp="2026-01-26 19:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:03:55.428434067 +0000 UTC m=+1319.993340799" watchObservedRunningTime="2026-01-26 19:03:55.432030006 +0000 UTC m=+1319.996936738" Jan 26 19:03:56 crc kubenswrapper[4770]: I0126 19:03:56.159773 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 19:03:56 crc kubenswrapper[4770]: I0126 19:03:56.178322 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 26 19:03:56 crc kubenswrapper[4770]: I0126 19:03:56.178836 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine" containerID="cri-o://faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f" gracePeriod=30 Jan 26 19:03:56 crc kubenswrapper[4770]: I0126 19:03:56.421196 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:03:56 crc kubenswrapper[4770]: I0126 19:03:56.421223 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 19:03:56 crc kubenswrapper[4770]: I0126 19:03:56.466336 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:03:56 crc kubenswrapper[4770]: I0126 19:03:56.546175 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 19:03:57 crc kubenswrapper[4770]: I0126 19:03:57.428390 4770 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-cell0-conductor-0" podUID="2dbdedd9-6112-426d-9160-6f6785775066" containerName="nova-cell0-conductor-conductor" containerID="cri-o://29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6" gracePeriod=30 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.099091 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.099827 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-central-agent" containerID="cri-o://07072567434bd1cdd1878855cb0f01e59056dce5f28eaef82384cee25520a70b" gracePeriod=30 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.099950 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="sg-core" containerID="cri-o://f95b40d5ab0abb9c7613558157e934151d43e593ac211029126db6413b4100f6" gracePeriod=30 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.099987 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-notification-agent" containerID="cri-o://05f019326102235f6b7b6c8240100c5e8aa5d87e0cb41c12216c8d152d5ef4a1" gracePeriod=30 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.100135 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="proxy-httpd" containerID="cri-o://65529df6bef3a771240166596e5d55ce138b19388afa0f98651c3e7f72572ebb" gracePeriod=30 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.367276 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.437991 4770 generic.go:334] "Generic (PLEG): container finished" podID="2dbdedd9-6112-426d-9160-6f6785775066" containerID="29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6" exitCode=0 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.438039 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.438064 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2dbdedd9-6112-426d-9160-6f6785775066","Type":"ContainerDied","Data":"29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6"} Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.438097 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2dbdedd9-6112-426d-9160-6f6785775066","Type":"ContainerDied","Data":"4ba3985cee90224cc5be491ca8096b762e5331e90999f9f26b261e0c87d4db16"} Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.438114 4770 scope.go:117] "RemoveContainer" containerID="29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.441903 4770 generic.go:334] "Generic (PLEG): container finished" podID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerID="65529df6bef3a771240166596e5d55ce138b19388afa0f98651c3e7f72572ebb" exitCode=0 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.441932 4770 generic.go:334] "Generic (PLEG): container finished" podID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerID="f95b40d5ab0abb9c7613558157e934151d43e593ac211029126db6413b4100f6" exitCode=2 Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.442428 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerDied","Data":"65529df6bef3a771240166596e5d55ce138b19388afa0f98651c3e7f72572ebb"} Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.442483 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerDied","Data":"f95b40d5ab0abb9c7613558157e934151d43e593ac211029126db6413b4100f6"} Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.459407 4770 scope.go:117] "RemoveContainer" containerID="29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6" Jan 26 19:03:58 crc kubenswrapper[4770]: E0126 19:03:58.459879 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6\": container with ID starting with 29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6 not found: ID does not exist" containerID="29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.459931 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6"} err="failed to get container status \"29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6\": rpc error: code = NotFound desc = could not find container \"29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6\": container with ID starting with 29b55eaec3f9f1ba961f234d4b2070a025ef1e50d492696d3657845e50711dd6 not found: ID does not exist" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.501317 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle\") pod \"2dbdedd9-6112-426d-9160-6f6785775066\" (UID: 
\"2dbdedd9-6112-426d-9160-6f6785775066\") " Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.501451 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-config-data\") pod \"2dbdedd9-6112-426d-9160-6f6785775066\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.501503 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rldm5\" (UniqueName: \"kubernetes.io/projected/2dbdedd9-6112-426d-9160-6f6785775066-kube-api-access-rldm5\") pod \"2dbdedd9-6112-426d-9160-6f6785775066\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.509476 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbdedd9-6112-426d-9160-6f6785775066-kube-api-access-rldm5" (OuterVolumeSpecName: "kube-api-access-rldm5") pod "2dbdedd9-6112-426d-9160-6f6785775066" (UID: "2dbdedd9-6112-426d-9160-6f6785775066"). InnerVolumeSpecName "kube-api-access-rldm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:58 crc kubenswrapper[4770]: E0126 19:03:58.539014 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle podName:2dbdedd9-6112-426d-9160-6f6785775066 nodeName:}" failed. No retries permitted until 2026-01-26 19:03:59.038985742 +0000 UTC m=+1323.603892494 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle") pod "2dbdedd9-6112-426d-9160-6f6785775066" (UID: "2dbdedd9-6112-426d-9160-6f6785775066") : error deleting /var/lib/kubelet/pods/2dbdedd9-6112-426d-9160-6f6785775066/volume-subpaths: remove /var/lib/kubelet/pods/2dbdedd9-6112-426d-9160-6f6785775066/volume-subpaths: no such file or directory Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.542383 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-config-data" (OuterVolumeSpecName: "config-data") pod "2dbdedd9-6112-426d-9160-6f6785775066" (UID: "2dbdedd9-6112-426d-9160-6f6785775066"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.603627 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.603679 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rldm5\" (UniqueName: \"kubernetes.io/projected/2dbdedd9-6112-426d-9160-6f6785775066-kube-api-access-rldm5\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:58 crc kubenswrapper[4770]: I0126 19:03:58.888401 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.010570 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-combined-ca-bundle\") pod \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.010625 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-logs\") pod \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.010680 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-custom-prometheus-ca\") pod \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.010756 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pptb6\" (UniqueName: \"kubernetes.io/projected/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-kube-api-access-pptb6\") pod \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.010880 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-config-data\") pod \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\" (UID: \"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444\") " Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.011107 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-logs" (OuterVolumeSpecName: "logs") pod "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" (UID: "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.011668 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.015826 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-kube-api-access-pptb6" (OuterVolumeSpecName: "kube-api-access-pptb6") pod "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" (UID: "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444"). InnerVolumeSpecName "kube-api-access-pptb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.039594 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" (UID: "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.047646 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" (UID: "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.079194 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-config-data" (OuterVolumeSpecName: "config-data") pod "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" (UID: "ba7a2e1d-7c6b-4d89-ac01-5a93fb071444"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.112994 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle\") pod \"2dbdedd9-6112-426d-9160-6f6785775066\" (UID: \"2dbdedd9-6112-426d-9160-6f6785775066\") " Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.113480 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.113500 4770 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.113514 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pptb6\" (UniqueName: \"kubernetes.io/projected/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-kube-api-access-pptb6\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.113528 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.115829 4770 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dbdedd9-6112-426d-9160-6f6785775066" (UID: "2dbdedd9-6112-426d-9160-6f6785775066"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.215958 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dbdedd9-6112-426d-9160-6f6785775066-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.414338 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.436459 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.456553 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: E0126 19:03:59.456932 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.456945 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: E0126 19:03:59.456965 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.456971 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: E0126 19:03:59.456982 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.456988 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: E0126 19:03:59.456995 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.457001 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: E0126 19:03:59.457013 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbdedd9-6112-426d-9160-6f6785775066" containerName="nova-cell0-conductor-conductor"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.457019 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbdedd9-6112-426d-9160-6f6785775066" containerName="nova-cell0-conductor-conductor"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.457197 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbdedd9-6112-426d-9160-6f6785775066" containerName="nova-cell0-conductor-conductor"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.457211 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.457222 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.457234 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.457820 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.460133 4770 generic.go:334] "Generic (PLEG): container finished" podID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerID="faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f" exitCode=0
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.460209 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerDied","Data":"faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f"}
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.460238 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ba7a2e1d-7c6b-4d89-ac01-5a93fb071444","Type":"ContainerDied","Data":"a8e49ec4068a96558d84b99cb49cf8ed9f9175f2e8592a87dc625685d6e0d506"}
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.460256 4770 scope.go:117] "RemoveContainer" containerID="faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.460404 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.460610 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.460773 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-krbbg"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.466569 4770 generic.go:334] "Generic (PLEG): container finished" podID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerID="07072567434bd1cdd1878855cb0f01e59056dce5f28eaef82384cee25520a70b" exitCode=0
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.466658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerDied","Data":"07072567434bd1cdd1878855cb0f01e59056dce5f28eaef82384cee25520a70b"}
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.475983 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.496580 4770 scope.go:117] "RemoveContainer" containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.526642 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.526771 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.526820 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49gxz\" (UniqueName: \"kubernetes.io/projected/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-kube-api-access-49gxz\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.531143 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.545007 4770 scope.go:117] "RemoveContainer" containerID="faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f"
Jan 26 19:03:59 crc kubenswrapper[4770]: E0126 19:03:59.545630 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f\": container with ID starting with faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f not found: ID does not exist" containerID="faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.545776 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f"} err="failed to get container status \"faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f\": rpc error: code = NotFound desc = could not find container \"faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f\": container with ID starting with faf65f76bdb6e19cc2b1951a2167c47ef0772a6ce073e6cdb7664f69c701ab6f not found: ID does not exist"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.545885 4770 scope.go:117] "RemoveContainer" containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa"
Jan 26 19:03:59 crc kubenswrapper[4770]: E0126 19:03:59.546222 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa\": container with ID starting with b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa not found: ID does not exist" containerID="b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.546328 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa"} err="failed to get container status \"b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa\": rpc error: code = NotFound desc = could not find container \"b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa\": container with ID starting with b6b67533c1cae00c0080331461b986299658b9f2cab0510963ce8923db4f6dfa not found: ID does not exist"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.551739 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.559016 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.560260 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" containerName="watcher-decision-engine"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.561167 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.563002 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.568679 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.628537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49gxz\" (UniqueName: \"kubernetes.io/projected/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-kube-api-access-49gxz\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.628916 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6jv\" (UniqueName: \"kubernetes.io/projected/e9760499-8609-4691-b587-2265122f7af7-kube-api-access-nq6jv\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.629119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.629625 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9760499-8609-4691-b587-2265122f7af7-logs\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.629932 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.629963 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.630020 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.630168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.635895 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.636087 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.651011 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49gxz\" (UniqueName: \"kubernetes.io/projected/7b7b00b0-e2fe-4012-8d42-ed69e1345f94-kube-api-access-49gxz\") pod \"nova-cell0-conductor-0\" (UID: \"7b7b00b0-e2fe-4012-8d42-ed69e1345f94\") " pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.732158 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9760499-8609-4691-b587-2265122f7af7-logs\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.732227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.732246 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.732316 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6jv\" (UniqueName: \"kubernetes.io/projected/e9760499-8609-4691-b587-2265122f7af7-kube-api-access-nq6jv\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.732371 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.732629 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9760499-8609-4691-b587-2265122f7af7-logs\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.736150 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.736955 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.737134 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e9760499-8609-4691-b587-2265122f7af7-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.749416 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6jv\" (UniqueName: \"kubernetes.io/projected/e9760499-8609-4691-b587-2265122f7af7-kube-api-access-nq6jv\") pod \"watcher-decision-engine-0\" (UID: \"e9760499-8609-4691-b587-2265122f7af7\") " pod="openstack/watcher-decision-engine-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.778241 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbdedd9-6112-426d-9160-6f6785775066" path="/var/lib/kubelet/pods/2dbdedd9-6112-426d-9160-6f6785775066/volumes"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.779147 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7a2e1d-7c6b-4d89-ac01-5a93fb071444" path="/var/lib/kubelet/pods/ba7a2e1d-7c6b-4d89-ac01-5a93fb071444/volumes"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.787901 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 26 19:03:59 crc kubenswrapper[4770]: I0126 19:03:59.879963 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.258572 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.330367 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.330433 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 19:04:00 crc kubenswrapper[4770]: W0126 19:04:00.368790 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9760499_8609_4691_b587_2265122f7af7.slice/crio-e36f414790436ded80492f63dc342686e27402bf25fa671cef7126a51276efce WatchSource:0}: Error finding container e36f414790436ded80492f63dc342686e27402bf25fa671cef7126a51276efce: Status 404 returned error can't find the container with id e36f414790436ded80492f63dc342686e27402bf25fa671cef7126a51276efce
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.373234 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.485308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7b7b00b0-e2fe-4012-8d42-ed69e1345f94","Type":"ContainerStarted","Data":"f717a2898dba98c6beb9294aee5adcab8054f58e1c16d0fd014efc454bd7853d"}
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.486723 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e9760499-8609-4691-b587-2265122f7af7","Type":"ContainerStarted","Data":"e36f414790436ded80492f63dc342686e27402bf25fa671cef7126a51276efce"}
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.489570 4770 generic.go:334] "Generic (PLEG): container finished" podID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerID="05f019326102235f6b7b6c8240100c5e8aa5d87e0cb41c12216c8d152d5ef4a1" exitCode=0
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.489604 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerDied","Data":"05f019326102235f6b7b6c8240100c5e8aa5d87e0cb41c12216c8d152d5ef4a1"}
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.699445 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.751080 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-config-data\") pod \"49a4d97b-c448-4f1a-b898-ebff4a739deb\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") "
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.751430 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-log-httpd\") pod \"49a4d97b-c448-4f1a-b898-ebff4a739deb\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") "
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.751483 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-sg-core-conf-yaml\") pod \"49a4d97b-c448-4f1a-b898-ebff4a739deb\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") "
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.751576 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-scripts\") pod \"49a4d97b-c448-4f1a-b898-ebff4a739deb\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") "
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.751741 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc4wx\" (UniqueName: \"kubernetes.io/projected/49a4d97b-c448-4f1a-b898-ebff4a739deb-kube-api-access-dc4wx\") pod \"49a4d97b-c448-4f1a-b898-ebff4a739deb\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") "
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.751766 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-run-httpd\") pod \"49a4d97b-c448-4f1a-b898-ebff4a739deb\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") "
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.751823 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-combined-ca-bundle\") pod \"49a4d97b-c448-4f1a-b898-ebff4a739deb\" (UID: \"49a4d97b-c448-4f1a-b898-ebff4a739deb\") "
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.752244 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49a4d97b-c448-4f1a-b898-ebff4a739deb" (UID: "49a4d97b-c448-4f1a-b898-ebff4a739deb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.753059 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49a4d97b-c448-4f1a-b898-ebff4a739deb" (UID: "49a4d97b-c448-4f1a-b898-ebff4a739deb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.756050 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-scripts" (OuterVolumeSpecName: "scripts") pod "49a4d97b-c448-4f1a-b898-ebff4a739deb" (UID: "49a4d97b-c448-4f1a-b898-ebff4a739deb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.756794 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a4d97b-c448-4f1a-b898-ebff4a739deb-kube-api-access-dc4wx" (OuterVolumeSpecName: "kube-api-access-dc4wx") pod "49a4d97b-c448-4f1a-b898-ebff4a739deb" (UID: "49a4d97b-c448-4f1a-b898-ebff4a739deb"). InnerVolumeSpecName "kube-api-access-dc4wx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.781648 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49a4d97b-c448-4f1a-b898-ebff4a739deb" (UID: "49a4d97b-c448-4f1a-b898-ebff4a739deb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.832193 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49a4d97b-c448-4f1a-b898-ebff4a739deb" (UID: "49a4d97b-c448-4f1a-b898-ebff4a739deb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.842552 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-config-data" (OuterVolumeSpecName: "config-data") pod "49a4d97b-c448-4f1a-b898-ebff4a739deb" (UID: "49a4d97b-c448-4f1a-b898-ebff4a739deb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.855011 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.855042 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.855051 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.855060 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.855068 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49a4d97b-c448-4f1a-b898-ebff4a739deb-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.855076 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc4wx\" (UniqueName: \"kubernetes.io/projected/49a4d97b-c448-4f1a-b898-ebff4a739deb-kube-api-access-dc4wx\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:00 crc kubenswrapper[4770]: I0126 19:04:00.855088 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49a4d97b-c448-4f1a-b898-ebff4a739deb-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.503975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7b7b00b0-e2fe-4012-8d42-ed69e1345f94","Type":"ContainerStarted","Data":"1649e38ec0c9c56c59f392a2af7ab91e775e9d089ed17dc03a39b31fb42bffd6"}
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.504073 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.507607 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e9760499-8609-4691-b587-2265122f7af7","Type":"ContainerStarted","Data":"a0cb6e0b01b35b2e35dddb1ccbacf44b305d78e4b8d01868beedc8c5d64e3ed2"}
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.514559 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49a4d97b-c448-4f1a-b898-ebff4a739deb","Type":"ContainerDied","Data":"07976d188bca7d9119aadf3a2aa85616c4aea3d2dd7f038f7f04915336d28467"}
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.514607 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.514636 4770 scope.go:117] "RemoveContainer" containerID="65529df6bef3a771240166596e5d55ce138b19388afa0f98651c3e7f72572ebb"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.534840 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.534818386 podStartE2EDuration="2.534818386s" podCreationTimestamp="2026-01-26 19:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:01.524925224 +0000 UTC m=+1326.089831986" watchObservedRunningTime="2026-01-26 19:04:01.534818386 +0000 UTC m=+1326.099725138"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.553450 4770 scope.go:117] "RemoveContainer" containerID="f95b40d5ab0abb9c7613558157e934151d43e593ac211029126db6413b4100f6"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.562267 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.562249039 podStartE2EDuration="2.562249039s" podCreationTimestamp="2026-01-26 19:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:01.561840878 +0000 UTC m=+1326.126747620" watchObservedRunningTime="2026-01-26 19:04:01.562249039 +0000 UTC m=+1326.127155771"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.594754 4770 scope.go:117] "RemoveContainer" containerID="05f019326102235f6b7b6c8240100c5e8aa5d87e0cb41c12216c8d152d5ef4a1"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.603587 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.619091 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.635044 4770 scope.go:117] "RemoveContainer" containerID="07072567434bd1cdd1878855cb0f01e59056dce5f28eaef82384cee25520a70b"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.636994 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:04:01 crc kubenswrapper[4770]: E0126 19:04:01.637455 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-notification-agent"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637479 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-notification-agent"
Jan 26 19:04:01 crc kubenswrapper[4770]: E0126 19:04:01.637506 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-central-agent"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637516 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-central-agent"
Jan 26 19:04:01 crc kubenswrapper[4770]: E0126 19:04:01.637540 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="proxy-httpd"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637548 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="proxy-httpd"
Jan 26 19:04:01 crc kubenswrapper[4770]: E0126 19:04:01.637562 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="sg-core"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637570 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="sg-core"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637805 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="sg-core"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637832 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="proxy-httpd"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637850 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-central-agent"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.637867 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" containerName="ceilometer-notification-agent"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.639852 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.643744 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.644255 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.650217 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.774890 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-run-httpd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.774938 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-log-httpd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.775090 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-scripts\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.775135 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-config-data\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.775269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfbd\" (UniqueName: \"kubernetes.io/projected/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-kube-api-access-dpfbd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.775341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0"
Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.775597 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID:
\"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.781040 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a4d97b-c448-4f1a-b898-ebff4a739deb" path="/var/lib/kubelet/pods/49a4d97b-c448-4f1a-b898-ebff4a739deb/volumes" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.876946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-scripts\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877005 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-config-data\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877042 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfbd\" (UniqueName: \"kubernetes.io/projected/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-kube-api-access-dpfbd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877118 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877145 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-run-httpd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877158 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-log-httpd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877542 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-log-httpd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.877777 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-run-httpd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.882215 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.889979 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-scripts\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.892043 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.892532 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-config-data\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.893717 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfbd\" (UniqueName: \"kubernetes.io/projected/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-kube-api-access-dpfbd\") pod \"ceilometer-0\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " pod="openstack/ceilometer-0" Jan 26 19:04:01 crc kubenswrapper[4770]: I0126 19:04:01.958954 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:02 crc kubenswrapper[4770]: W0126 19:04:02.467736 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7df1c475_3b6e_4efb_bc53_f2b85cbf71a3.slice/crio-fb53de670cc50bfde42cf7fc92dc0deec63919492538450275ff9a14492135d6 WatchSource:0}: Error finding container fb53de670cc50bfde42cf7fc92dc0deec63919492538450275ff9a14492135d6: Status 404 returned error can't find the container with id fb53de670cc50bfde42cf7fc92dc0deec63919492538450275ff9a14492135d6 Jan 26 19:04:02 crc kubenswrapper[4770]: I0126 19:04:02.470191 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:02 crc kubenswrapper[4770]: I0126 19:04:02.528530 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerStarted","Data":"fb53de670cc50bfde42cf7fc92dc0deec63919492538450275ff9a14492135d6"} Jan 26 19:04:03 crc kubenswrapper[4770]: I0126 19:04:03.538943 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerStarted","Data":"23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563"} Jan 26 19:04:03 crc kubenswrapper[4770]: I0126 19:04:03.539265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerStarted","Data":"3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9"} Jan 26 19:04:03 crc kubenswrapper[4770]: I0126 19:04:03.539279 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerStarted","Data":"3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36"} Jan 26 19:04:05 crc kubenswrapper[4770]: I0126 
19:04:05.561298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerStarted","Data":"09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce"} Jan 26 19:04:05 crc kubenswrapper[4770]: I0126 19:04:05.562019 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:04:05 crc kubenswrapper[4770]: I0126 19:04:05.595600 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.112005575 podStartE2EDuration="4.595577673s" podCreationTimestamp="2026-01-26 19:04:01 +0000 UTC" firstStartedPulling="2026-01-26 19:04:02.469571254 +0000 UTC m=+1327.034477986" lastFinishedPulling="2026-01-26 19:04:04.953143322 +0000 UTC m=+1329.518050084" observedRunningTime="2026-01-26 19:04:05.581971229 +0000 UTC m=+1330.146878001" watchObservedRunningTime="2026-01-26 19:04:05.595577673 +0000 UTC m=+1330.160484415" Jan 26 19:04:09 crc kubenswrapper[4770]: I0126 19:04:09.816664 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 19:04:09 crc kubenswrapper[4770]: I0126 19:04:09.881942 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 26 19:04:09 crc kubenswrapper[4770]: I0126 19:04:09.910142 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.330234 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rcvbt"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.331764 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.334131 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.334454 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.344894 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rcvbt"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.461291 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-config-data\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.461524 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.461746 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-scripts\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.461825 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscpd\" (UniqueName: 
\"kubernetes.io/projected/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-kube-api-access-pscpd\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.565723 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pscpd\" (UniqueName: \"kubernetes.io/projected/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-kube-api-access-pscpd\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.565793 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-config-data\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.565848 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.565954 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-scripts\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.568805 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.570895 4770 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.573836 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-config-data\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.577527 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-scripts\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.583118 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.593918 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.595437 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.595471 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.609764 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.617727 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.618499 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pscpd\" (UniqueName: \"kubernetes.io/projected/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-kube-api-access-pscpd\") pod \"nova-cell0-cell-mapping-rcvbt\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.633106 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.655328 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.670023 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtj4\" (UniqueName: \"kubernetes.io/projected/5ac238c7-92d4-46e8-8845-f77b39a3b141-kube-api-access-qhtj4\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.670131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-config-data\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.670173 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xzs\" (UniqueName: \"kubernetes.io/projected/0ad9ec69-750c-4908-a389-0a95f787f5f9-kube-api-access-k5xzs\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.670221 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.670257 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9ec69-750c-4908-a389-0a95f787f5f9-logs\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 
19:04:10.670296 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-config-data\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.670340 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.679076 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.744276 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.763724 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.773774 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.773867 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhtj4\" (UniqueName: \"kubernetes.io/projected/5ac238c7-92d4-46e8-8845-f77b39a3b141-kube-api-access-qhtj4\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 
19:04:10.773916 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-config-data\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.773946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xzs\" (UniqueName: \"kubernetes.io/projected/0ad9ec69-750c-4908-a389-0a95f787f5f9-kube-api-access-k5xzs\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.774002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.774031 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9ec69-750c-4908-a389-0a95f787f5f9-logs\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.774066 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-config-data\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.776770 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.782055 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9ec69-750c-4908-a389-0a95f787f5f9-logs\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.786455 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.795267 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-config-data\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.804380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.804952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-config-data\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.811714 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 
19:04:10.823277 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.848612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xzs\" (UniqueName: \"kubernetes.io/projected/0ad9ec69-750c-4908-a389-0a95f787f5f9-kube-api-access-k5xzs\") pod \"nova-api-0\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " pod="openstack/nova-api-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.853229 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhtj4\" (UniqueName: \"kubernetes.io/projected/5ac238c7-92d4-46e8-8845-f77b39a3b141-kube-api-access-qhtj4\") pod \"nova-scheduler-0\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.855850 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57b884f959-6g8pb"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.858930 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.877487 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.880252 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-config-data\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.880376 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.880550 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b64549-9254-46c5-ab8f-e86a4e351671-logs\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.880576 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7sp4\" (UniqueName: \"kubernetes.io/projected/06b64549-9254-46c5-ab8f-e86a4e351671-kube-api-access-h7sp4\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.881907 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.893456 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.918352 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b884f959-6g8pb"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.935063 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-sb\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982635 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-config-data\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982797 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982836 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgfw\" (UniqueName: \"kubernetes.io/projected/f1e4248b-3c62-4777-8eeb-07cae864a0bc-kube-api-access-tkgfw\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982856 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-swift-storage-0\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982898 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-svc\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982925 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-config\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982945 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b64549-9254-46c5-ab8f-e86a4e351671-logs\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982961 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7sp4\" (UniqueName: \"kubernetes.io/projected/06b64549-9254-46c5-ab8f-e86a4e351671-kube-api-access-h7sp4\") pod \"nova-metadata-0\" (UID: 
\"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.982983 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sj2g\" (UniqueName: \"kubernetes.io/projected/5388854f-bcd6-460e-be84-c329e053d5ae-kube-api-access-6sj2g\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.983006 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.983025 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.983043 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-nb\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.983942 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b64549-9254-46c5-ab8f-e86a4e351671-logs\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") 
" pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.988895 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:10 crc kubenswrapper[4770]: I0126 19:04:10.991864 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-config-data\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.009104 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7sp4\" (UniqueName: \"kubernetes.io/projected/06b64549-9254-46c5-ab8f-e86a4e351671-kube-api-access-h7sp4\") pod \"nova-metadata-0\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " pod="openstack/nova-metadata-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.020007 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.038350 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.085038 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.085085 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-nb\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.086202 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-nb\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.086303 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-sb\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.086414 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkgfw\" (UniqueName: \"kubernetes.io/projected/f1e4248b-3c62-4777-8eeb-07cae864a0bc-kube-api-access-tkgfw\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 
26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.086438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-swift-storage-0\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.086505 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-svc\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.087246 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-swift-storage-0\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.087563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-sb\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.088128 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-svc\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.088877 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.088949 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-config\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.089616 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-config\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.089684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sj2g\" (UniqueName: \"kubernetes.io/projected/5388854f-bcd6-460e-be84-c329e053d5ae-kube-api-access-6sj2g\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.090061 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.104387 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.109979 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkgfw\" (UniqueName: \"kubernetes.io/projected/f1e4248b-3c62-4777-8eeb-07cae864a0bc-kube-api-access-tkgfw\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.111323 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sj2g\" (UniqueName: \"kubernetes.io/projected/5388854f-bcd6-460e-be84-c329e053d5ae-kube-api-access-6sj2g\") pod \"dnsmasq-dns-57b884f959-6g8pb\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.221142 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.233819 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.251906 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.461434 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rcvbt"] Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.642821 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rcvbt" event={"ID":"0cf8c60c-ba0f-4c3e-8df1-8323360857b5","Type":"ContainerStarted","Data":"96c46792abad9cd60625d395023dc37d9f978af8bbc540083522fbd60730eefc"} Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.688762 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.803274 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.904490 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:11 crc kubenswrapper[4770]: W0126 19:04:11.925246 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06b64549_9254_46c5_ab8f_e86a4e351671.slice/crio-301e7ad6a1b058db8580ac80578b305964121ef38a7d95ee52990ab181afdf9f WatchSource:0}: Error finding container 301e7ad6a1b058db8580ac80578b305964121ef38a7d95ee52990ab181afdf9f: Status 404 returned error can't find the container with id 301e7ad6a1b058db8580ac80578b305964121ef38a7d95ee52990ab181afdf9f Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.943039 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b884f959-6g8pb"] Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.965809 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-69wnj"] Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.968809 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.972400 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 19:04:11 crc kubenswrapper[4770]: I0126 19:04:11.979083 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.004116 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-69wnj"] Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.016760 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.016930 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhk7\" (UniqueName: \"kubernetes.io/projected/d07378f5-6d68-438a-8bd0-01b033da7b25-kube-api-access-hnhk7\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.017180 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-config-data\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.017276 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-scripts\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.033501 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.119177 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhk7\" (UniqueName: \"kubernetes.io/projected/d07378f5-6d68-438a-8bd0-01b033da7b25-kube-api-access-hnhk7\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.119667 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-config-data\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.119758 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-scripts\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.119824 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: 
\"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.126720 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-scripts\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.127163 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-config-data\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.127646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.143900 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhk7\" (UniqueName: \"kubernetes.io/projected/d07378f5-6d68-438a-8bd0-01b033da7b25-kube-api-access-hnhk7\") pod \"nova-cell1-conductor-db-sync-69wnj\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.291647 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.653773 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06b64549-9254-46c5-ab8f-e86a4e351671","Type":"ContainerStarted","Data":"301e7ad6a1b058db8580ac80578b305964121ef38a7d95ee52990ab181afdf9f"} Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.655419 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rcvbt" event={"ID":"0cf8c60c-ba0f-4c3e-8df1-8323360857b5","Type":"ContainerStarted","Data":"b0722d76b8af30b4179059dd64b413d06163f7e4f5e20eedde53dce53362e5a0"} Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.656946 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ad9ec69-750c-4908-a389-0a95f787f5f9","Type":"ContainerStarted","Data":"4469d51cb5f2662b9933d5afb9123be8144fa379638ff520dcfab7ce5e39a06f"} Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.659542 4770 generic.go:334] "Generic (PLEG): container finished" podID="5388854f-bcd6-460e-be84-c329e053d5ae" containerID="e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9" exitCode=0 Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.659617 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" event={"ID":"5388854f-bcd6-460e-be84-c329e053d5ae","Type":"ContainerDied","Data":"e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9"} Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.659657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" event={"ID":"5388854f-bcd6-460e-be84-c329e053d5ae","Type":"ContainerStarted","Data":"8729a8461795621aea09f56b67afc7b1558cda9df105e9f9a52e7ac4ba3d2049"} Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.662350 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"5ac238c7-92d4-46e8-8845-f77b39a3b141","Type":"ContainerStarted","Data":"d8c6af586c0f0290789369380f19a964eadd2863d1cdab30cfab28222bf27b35"} Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.663808 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1e4248b-3c62-4777-8eeb-07cae864a0bc","Type":"ContainerStarted","Data":"e5c2d9db8c8ceff2dce3624e358f036a2fcc00ac68e356032a36a8e9b1331de7"} Jan 26 19:04:12 crc kubenswrapper[4770]: I0126 19:04:12.670578 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rcvbt" podStartSLOduration=2.670559389 podStartE2EDuration="2.670559389s" podCreationTimestamp="2026-01-26 19:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:12.668257796 +0000 UTC m=+1337.233164528" watchObservedRunningTime="2026-01-26 19:04:12.670559389 +0000 UTC m=+1337.235466121" Jan 26 19:04:14 crc kubenswrapper[4770]: I0126 19:04:14.205723 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:14 crc kubenswrapper[4770]: I0126 19:04:14.222531 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.699909 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1e4248b-3c62-4777-8eeb-07cae864a0bc","Type":"ContainerStarted","Data":"d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da"} Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.700020 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f1e4248b-3c62-4777-8eeb-07cae864a0bc" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da" gracePeriod=30 Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.702936 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06b64549-9254-46c5-ab8f-e86a4e351671","Type":"ContainerStarted","Data":"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94"} Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.704797 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ad9ec69-750c-4908-a389-0a95f787f5f9","Type":"ContainerStarted","Data":"9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f"} Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.707301 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" event={"ID":"5388854f-bcd6-460e-be84-c329e053d5ae","Type":"ContainerStarted","Data":"0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd"} Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.707514 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.709802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5ac238c7-92d4-46e8-8845-f77b39a3b141","Type":"ContainerStarted","Data":"a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988"} Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.748511 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.573645338 podStartE2EDuration="5.748467007s" podCreationTimestamp="2026-01-26 19:04:10 +0000 UTC" firstStartedPulling="2026-01-26 19:04:12.046502483 +0000 UTC m=+1336.611409215" lastFinishedPulling="2026-01-26 19:04:15.221324142 +0000 UTC m=+1339.786230884" observedRunningTime="2026-01-26 
19:04:15.722902945 +0000 UTC m=+1340.287809677" watchObservedRunningTime="2026-01-26 19:04:15.748467007 +0000 UTC m=+1340.313373739" Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.761361 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.254432552 podStartE2EDuration="5.76133928s" podCreationTimestamp="2026-01-26 19:04:10 +0000 UTC" firstStartedPulling="2026-01-26 19:04:11.701060897 +0000 UTC m=+1336.265967629" lastFinishedPulling="2026-01-26 19:04:15.207967625 +0000 UTC m=+1339.772874357" observedRunningTime="2026-01-26 19:04:15.736977372 +0000 UTC m=+1340.301884104" watchObservedRunningTime="2026-01-26 19:04:15.76133928 +0000 UTC m=+1340.326246012" Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.776366 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" podStartSLOduration=5.776345963 podStartE2EDuration="5.776345963s" podCreationTimestamp="2026-01-26 19:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:15.755765937 +0000 UTC m=+1340.320672689" watchObservedRunningTime="2026-01-26 19:04:15.776345963 +0000 UTC m=+1340.341252685" Jan 26 19:04:15 crc kubenswrapper[4770]: I0126 19:04:15.794350 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-69wnj"] Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.040164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.253103 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.721200 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"06b64549-9254-46c5-ab8f-e86a4e351671","Type":"ContainerStarted","Data":"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f"} Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.721357 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" containerName="nova-metadata-log" containerID="cri-o://54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94" gracePeriod=30 Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.721410 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" containerName="nova-metadata-metadata" containerID="cri-o://fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f" gracePeriod=30 Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.723906 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-69wnj" event={"ID":"d07378f5-6d68-438a-8bd0-01b033da7b25","Type":"ContainerStarted","Data":"979e8eedb42cf5c4b771d1bb67c11ee682fba3e594bc07d121a304427ee27269"} Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.724071 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-69wnj" event={"ID":"d07378f5-6d68-438a-8bd0-01b033da7b25","Type":"ContainerStarted","Data":"35c6719efb10bc8b24e74ba15cb7aff6db057e2885c4825fad63e0f485321d16"} Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.733627 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ad9ec69-750c-4908-a389-0a95f787f5f9","Type":"ContainerStarted","Data":"e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924"} Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.750452 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.461117899 
podStartE2EDuration="6.750431721s" podCreationTimestamp="2026-01-26 19:04:10 +0000 UTC" firstStartedPulling="2026-01-26 19:04:11.933247533 +0000 UTC m=+1336.498154265" lastFinishedPulling="2026-01-26 19:04:15.222561345 +0000 UTC m=+1339.787468087" observedRunningTime="2026-01-26 19:04:16.743192921 +0000 UTC m=+1341.308099653" watchObservedRunningTime="2026-01-26 19:04:16.750431721 +0000 UTC m=+1341.315338453" Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.782810 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.379771784 podStartE2EDuration="6.782605394s" podCreationTimestamp="2026-01-26 19:04:10 +0000 UTC" firstStartedPulling="2026-01-26 19:04:11.819460028 +0000 UTC m=+1336.384366760" lastFinishedPulling="2026-01-26 19:04:15.222293638 +0000 UTC m=+1339.787200370" observedRunningTime="2026-01-26 19:04:16.771923851 +0000 UTC m=+1341.336830593" watchObservedRunningTime="2026-01-26 19:04:16.782605394 +0000 UTC m=+1341.347512136" Jan 26 19:04:16 crc kubenswrapper[4770]: I0126 19:04:16.804028 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-69wnj" podStartSLOduration=5.8040059920000004 podStartE2EDuration="5.804005992s" podCreationTimestamp="2026-01-26 19:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:16.789456643 +0000 UTC m=+1341.354363375" watchObservedRunningTime="2026-01-26 19:04:16.804005992 +0000 UTC m=+1341.368912724" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.320016 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.435354 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-config-data\") pod \"06b64549-9254-46c5-ab8f-e86a4e351671\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.435441 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b64549-9254-46c5-ab8f-e86a4e351671-logs\") pod \"06b64549-9254-46c5-ab8f-e86a4e351671\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.435481 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-combined-ca-bundle\") pod \"06b64549-9254-46c5-ab8f-e86a4e351671\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.435557 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7sp4\" (UniqueName: \"kubernetes.io/projected/06b64549-9254-46c5-ab8f-e86a4e351671-kube-api-access-h7sp4\") pod \"06b64549-9254-46c5-ab8f-e86a4e351671\" (UID: \"06b64549-9254-46c5-ab8f-e86a4e351671\") " Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.436378 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b64549-9254-46c5-ab8f-e86a4e351671-logs" (OuterVolumeSpecName: "logs") pod "06b64549-9254-46c5-ab8f-e86a4e351671" (UID: "06b64549-9254-46c5-ab8f-e86a4e351671"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.440851 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b64549-9254-46c5-ab8f-e86a4e351671-kube-api-access-h7sp4" (OuterVolumeSpecName: "kube-api-access-h7sp4") pod "06b64549-9254-46c5-ab8f-e86a4e351671" (UID: "06b64549-9254-46c5-ab8f-e86a4e351671"). InnerVolumeSpecName "kube-api-access-h7sp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.463601 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06b64549-9254-46c5-ab8f-e86a4e351671" (UID: "06b64549-9254-46c5-ab8f-e86a4e351671"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.477802 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-config-data" (OuterVolumeSpecName: "config-data") pod "06b64549-9254-46c5-ab8f-e86a4e351671" (UID: "06b64549-9254-46c5-ab8f-e86a4e351671"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.537714 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.537761 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b64549-9254-46c5-ab8f-e86a4e351671-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.537770 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b64549-9254-46c5-ab8f-e86a4e351671-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.537783 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7sp4\" (UniqueName: \"kubernetes.io/projected/06b64549-9254-46c5-ab8f-e86a4e351671-kube-api-access-h7sp4\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.747095 4770 generic.go:334] "Generic (PLEG): container finished" podID="06b64549-9254-46c5-ab8f-e86a4e351671" containerID="fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f" exitCode=0 Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.747132 4770 generic.go:334] "Generic (PLEG): container finished" podID="06b64549-9254-46c5-ab8f-e86a4e351671" containerID="54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94" exitCode=143 Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.747156 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06b64549-9254-46c5-ab8f-e86a4e351671","Type":"ContainerDied","Data":"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f"} Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.747260 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06b64549-9254-46c5-ab8f-e86a4e351671","Type":"ContainerDied","Data":"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94"} Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.747281 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"06b64549-9254-46c5-ab8f-e86a4e351671","Type":"ContainerDied","Data":"301e7ad6a1b058db8580ac80578b305964121ef38a7d95ee52990ab181afdf9f"} Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.747291 4770 scope.go:117] "RemoveContainer" containerID="fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.748081 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.792008 4770 scope.go:117] "RemoveContainer" containerID="54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.817641 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.826882 4770 scope.go:117] "RemoveContainer" containerID="fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.831202 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:17 crc kubenswrapper[4770]: E0126 19:04:17.833167 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f\": container with ID starting with fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f not found: ID does not exist" 
containerID="fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.833214 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f"} err="failed to get container status \"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f\": rpc error: code = NotFound desc = could not find container \"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f\": container with ID starting with fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f not found: ID does not exist" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.833245 4770 scope.go:117] "RemoveContainer" containerID="54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94" Jan 26 19:04:17 crc kubenswrapper[4770]: E0126 19:04:17.836829 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94\": container with ID starting with 54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94 not found: ID does not exist" containerID="54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.836873 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94"} err="failed to get container status \"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94\": rpc error: code = NotFound desc = could not find container \"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94\": container with ID starting with 54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94 not found: ID does not exist" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.836905 4770 scope.go:117] 
"RemoveContainer" containerID="fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.841065 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f"} err="failed to get container status \"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f\": rpc error: code = NotFound desc = could not find container \"fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f\": container with ID starting with fb50d441676d65efbe2837740fc4b5d2a04eb8b60a3792d7932a7444309ae29f not found: ID does not exist" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.841101 4770 scope.go:117] "RemoveContainer" containerID="54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.841395 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94"} err="failed to get container status \"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94\": rpc error: code = NotFound desc = could not find container \"54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94\": container with ID starting with 54092bf478fa3c4601e366aaa4c028e06df37e0cce637b47698e05f18f3c7b94 not found: ID does not exist" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.865823 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:17 crc kubenswrapper[4770]: E0126 19:04:17.866811 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" containerName="nova-metadata-log" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.866837 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" 
containerName="nova-metadata-log" Jan 26 19:04:17 crc kubenswrapper[4770]: E0126 19:04:17.866862 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" containerName="nova-metadata-metadata" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.866871 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" containerName="nova-metadata-metadata" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.867122 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" containerName="nova-metadata-log" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.867164 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" containerName="nova-metadata-metadata" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.868647 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.871334 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.871530 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.883034 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.945566 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-logs\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.945854 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd74\" (UniqueName: \"kubernetes.io/projected/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-kube-api-access-2gd74\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.945957 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-config-data\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.946068 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:17 crc kubenswrapper[4770]: I0126 19:04:17.946574 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.048918 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.050085 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.050352 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-logs\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.050507 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gd74\" (UniqueName: \"kubernetes.io/projected/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-kube-api-access-2gd74\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.050657 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-config-data\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.050686 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-logs\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.054318 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 
19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.054895 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.058193 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-config-data\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.071667 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gd74\" (UniqueName: \"kubernetes.io/projected/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-kube-api-access-2gd74\") pod \"nova-metadata-0\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.193627 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.664771 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:18 crc kubenswrapper[4770]: W0126 19:04:18.671159 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6ee7ecb_a1b1_40ba_88b0_e1608aeb06d3.slice/crio-38ecf4b6fe52a6ba39f7e80f6b09e6121f8be6a60ef8c1e5d0077f79f782733e WatchSource:0}: Error finding container 38ecf4b6fe52a6ba39f7e80f6b09e6121f8be6a60ef8c1e5d0077f79f782733e: Status 404 returned error can't find the container with id 38ecf4b6fe52a6ba39f7e80f6b09e6121f8be6a60ef8c1e5d0077f79f782733e Jan 26 19:04:18 crc kubenswrapper[4770]: I0126 19:04:18.761557 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3","Type":"ContainerStarted","Data":"38ecf4b6fe52a6ba39f7e80f6b09e6121f8be6a60ef8c1e5d0077f79f782733e"} Jan 26 19:04:19 crc kubenswrapper[4770]: I0126 19:04:19.793870 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b64549-9254-46c5-ab8f-e86a4e351671" path="/var/lib/kubelet/pods/06b64549-9254-46c5-ab8f-e86a4e351671/volumes" Jan 26 19:04:19 crc kubenswrapper[4770]: I0126 19:04:19.796404 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3","Type":"ContainerStarted","Data":"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab"} Jan 26 19:04:19 crc kubenswrapper[4770]: I0126 19:04:19.796454 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3","Type":"ContainerStarted","Data":"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb"} Jan 26 19:04:19 crc kubenswrapper[4770]: I0126 19:04:19.816322 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.816300358 podStartE2EDuration="2.816300358s" podCreationTimestamp="2026-01-26 19:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:19.812145904 +0000 UTC m=+1344.377052646" watchObservedRunningTime="2026-01-26 19:04:19.816300358 +0000 UTC m=+1344.381207090" Jan 26 19:04:20 crc kubenswrapper[4770]: I0126 19:04:20.811796 4770 generic.go:334] "Generic (PLEG): container finished" podID="0cf8c60c-ba0f-4c3e-8df1-8323360857b5" containerID="b0722d76b8af30b4179059dd64b413d06163f7e4f5e20eedde53dce53362e5a0" exitCode=0 Jan 26 19:04:20 crc kubenswrapper[4770]: I0126 19:04:20.811950 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rcvbt" event={"ID":"0cf8c60c-ba0f-4c3e-8df1-8323360857b5","Type":"ContainerDied","Data":"b0722d76b8af30b4179059dd64b413d06163f7e4f5e20eedde53dce53362e5a0"} Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.023402 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.023474 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.039178 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.082328 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.235928 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.325432 4770 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c7cd669f-f6xsz"] Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.325659 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" podUID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerName="dnsmasq-dns" containerID="cri-o://04d964d355e8e8ea6edd62ea5207b51e10c5fcb29494bcd4a887077bec5b6f38" gracePeriod=10 Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.838937 4770 generic.go:334] "Generic (PLEG): container finished" podID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerID="04d964d355e8e8ea6edd62ea5207b51e10c5fcb29494bcd4a887077bec5b6f38" exitCode=0 Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.839083 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" event={"ID":"0b5878d0-6bfa-43b8-8382-7f0c503f7b24","Type":"ContainerDied","Data":"04d964d355e8e8ea6edd62ea5207b51e10c5fcb29494bcd4a887077bec5b6f38"} Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.839310 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" event={"ID":"0b5878d0-6bfa-43b8-8382-7f0c503f7b24","Type":"ContainerDied","Data":"f6ce24682bed44dc5ed33747338b07e37aadb75a854e1507221e0e0a6b21305c"} Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.839329 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ce24682bed44dc5ed33747338b07e37aadb75a854e1507221e0e0a6b21305c" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.878099 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.889260 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.939236 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-config\") pod \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.939326 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-svc\") pod \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.939360 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x7mz\" (UniqueName: \"kubernetes.io/projected/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-kube-api-access-8x7mz\") pod \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.939444 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-swift-storage-0\") pod \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.939479 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-nb\") pod \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.939528 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-sb\") pod \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\" (UID: \"0b5878d0-6bfa-43b8-8382-7f0c503f7b24\") " Jan 26 19:04:21 crc kubenswrapper[4770]: I0126 19:04:21.953170 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-kube-api-access-8x7mz" (OuterVolumeSpecName: "kube-api-access-8x7mz") pod "0b5878d0-6bfa-43b8-8382-7f0c503f7b24" (UID: "0b5878d0-6bfa-43b8-8382-7f0c503f7b24"). InnerVolumeSpecName "kube-api-access-8x7mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.006308 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0b5878d0-6bfa-43b8-8382-7f0c503f7b24" (UID: "0b5878d0-6bfa-43b8-8382-7f0c503f7b24"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.021124 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-config" (OuterVolumeSpecName: "config") pod "0b5878d0-6bfa-43b8-8382-7f0c503f7b24" (UID: "0b5878d0-6bfa-43b8-8382-7f0c503f7b24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.024726 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0b5878d0-6bfa-43b8-8382-7f0c503f7b24" (UID: "0b5878d0-6bfa-43b8-8382-7f0c503f7b24"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.033082 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0b5878d0-6bfa-43b8-8382-7f0c503f7b24" (UID: "0b5878d0-6bfa-43b8-8382-7f0c503f7b24"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.035152 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b5878d0-6bfa-43b8-8382-7f0c503f7b24" (UID: "0b5878d0-6bfa-43b8-8382-7f0c503f7b24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.041959 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.041993 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.042007 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.042020 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-config\") on node \"crc\" DevicePath \"\"" Jan 26 
19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.042033 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.042045 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x7mz\" (UniqueName: \"kubernetes.io/projected/0b5878d0-6bfa-43b8-8382-7f0c503f7b24-kube-api-access-8x7mz\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.108020 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.108086 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.176860 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.245303 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-config-data\") pod \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.245586 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-combined-ca-bundle\") pod \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.245641 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pscpd\" (UniqueName: \"kubernetes.io/projected/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-kube-api-access-pscpd\") pod \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.245685 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-scripts\") pod \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\" (UID: \"0cf8c60c-ba0f-4c3e-8df1-8323360857b5\") " Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.271291 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-scripts" (OuterVolumeSpecName: "scripts") pod "0cf8c60c-ba0f-4c3e-8df1-8323360857b5" (UID: "0cf8c60c-ba0f-4c3e-8df1-8323360857b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.287894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-kube-api-access-pscpd" (OuterVolumeSpecName: "kube-api-access-pscpd") pod "0cf8c60c-ba0f-4c3e-8df1-8323360857b5" (UID: "0cf8c60c-ba0f-4c3e-8df1-8323360857b5"). InnerVolumeSpecName "kube-api-access-pscpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.312797 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-config-data" (OuterVolumeSpecName: "config-data") pod "0cf8c60c-ba0f-4c3e-8df1-8323360857b5" (UID: "0cf8c60c-ba0f-4c3e-8df1-8323360857b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.348016 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pscpd\" (UniqueName: \"kubernetes.io/projected/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-kube-api-access-pscpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.348051 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.348061 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.351915 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") 
pod "0cf8c60c-ba0f-4c3e-8df1-8323360857b5" (UID: "0cf8c60c-ba0f-4c3e-8df1-8323360857b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.450249 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf8c60c-ba0f-4c3e-8df1-8323360857b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.860901 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c7cd669f-f6xsz" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.860927 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rcvbt" event={"ID":"0cf8c60c-ba0f-4c3e-8df1-8323360857b5","Type":"ContainerDied","Data":"96c46792abad9cd60625d395023dc37d9f978af8bbc540083522fbd60730eefc"} Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.860971 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c46792abad9cd60625d395023dc37d9f978af8bbc540083522fbd60730eefc" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.860978 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rcvbt" Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.917056 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c7cd669f-f6xsz"] Jan 26 19:04:22 crc kubenswrapper[4770]: I0126 19:04:22.926538 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84c7cd669f-f6xsz"] Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.023560 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.023934 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-log" containerID="cri-o://9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f" gracePeriod=30 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.024221 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-api" containerID="cri-o://e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924" gracePeriod=30 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.042622 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.111730 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.112243 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-log" containerID="cri-o://87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb" gracePeriod=30 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.112578 4770 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-metadata-0" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-metadata" containerID="cri-o://a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab" gracePeriod=30 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.194872 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.194928 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.659002 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.673604 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-logs\") pod \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.673747 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-config-data\") pod \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.673803 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gd74\" (UniqueName: \"kubernetes.io/projected/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-kube-api-access-2gd74\") pod \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.673839 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-nova-metadata-tls-certs\") pod \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.673957 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-combined-ca-bundle\") pod \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\" (UID: \"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3\") " Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.675273 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-logs" (OuterVolumeSpecName: "logs") pod "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" (UID: "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.685876 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-kube-api-access-2gd74" (OuterVolumeSpecName: "kube-api-access-2gd74") pod "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" (UID: "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3"). InnerVolumeSpecName "kube-api-access-2gd74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.717385 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-config-data" (OuterVolumeSpecName: "config-data") pod "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" (UID: "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.718518 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" (UID: "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.762623 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" (UID: "a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.776422 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.776461 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.776474 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.776486 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gd74\" (UniqueName: \"kubernetes.io/projected/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-kube-api-access-2gd74\") on 
node \"crc\" DevicePath \"\"" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.776499 4770 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.780051 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" path="/var/lib/kubelet/pods/0b5878d0-6bfa-43b8-8382-7f0c503f7b24/volumes" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.874282 4770 generic.go:334] "Generic (PLEG): container finished" podID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerID="9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f" exitCode=143 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.874354 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ad9ec69-750c-4908-a389-0a95f787f5f9","Type":"ContainerDied","Data":"9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f"} Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.876357 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerID="a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab" exitCode=0 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.876386 4770 generic.go:334] "Generic (PLEG): container finished" podID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerID="87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb" exitCode=143 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.877915 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.878131 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5ac238c7-92d4-46e8-8845-f77b39a3b141" containerName="nova-scheduler-scheduler" containerID="cri-o://a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" gracePeriod=30 Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.878501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3","Type":"ContainerDied","Data":"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab"} Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.878634 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3","Type":"ContainerDied","Data":"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb"} Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.878651 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3","Type":"ContainerDied","Data":"38ecf4b6fe52a6ba39f7e80f6b09e6121f8be6a60ef8c1e5d0077f79f782733e"} Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.878671 4770 scope.go:117] "RemoveContainer" containerID="a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.906768 4770 scope.go:117] "RemoveContainer" containerID="87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.926068 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.933982 4770 scope.go:117] "RemoveContainer" 
containerID="a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab" Jan 26 19:04:23 crc kubenswrapper[4770]: E0126 19:04:23.934966 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab\": container with ID starting with a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab not found: ID does not exist" containerID="a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.935010 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab"} err="failed to get container status \"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab\": rpc error: code = NotFound desc = could not find container \"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab\": container with ID starting with a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab not found: ID does not exist" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.935038 4770 scope.go:117] "RemoveContainer" containerID="87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb" Jan 26 19:04:23 crc kubenswrapper[4770]: E0126 19:04:23.937306 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb\": container with ID starting with 87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb not found: ID does not exist" containerID="87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.937357 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb"} err="failed to get container status \"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb\": rpc error: code = NotFound desc = could not find container \"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb\": container with ID starting with 87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb not found: ID does not exist" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.937388 4770 scope.go:117] "RemoveContainer" containerID="a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.937756 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab"} err="failed to get container status \"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab\": rpc error: code = NotFound desc = could not find container \"a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab\": container with ID starting with a585171d22443a55cd01165e917596715ca01d5a5329f235fff61a4df5cb1dab not found: ID does not exist" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.937781 4770 scope.go:117] "RemoveContainer" containerID="87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.938110 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb"} err="failed to get container status \"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb\": rpc error: code = NotFound desc = could not find container \"87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb\": container with ID starting with 87a273da052e657ff213232e4c877224ae404a1892edae7f6112d7fac6a552bb not found: ID does not 
exist" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.940098 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.955738 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:23 crc kubenswrapper[4770]: E0126 19:04:23.956260 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-log" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956300 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-log" Jan 26 19:04:23 crc kubenswrapper[4770]: E0126 19:04:23.956315 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerName="dnsmasq-dns" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956324 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerName="dnsmasq-dns" Jan 26 19:04:23 crc kubenswrapper[4770]: E0126 19:04:23.956342 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerName="init" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956351 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerName="init" Jan 26 19:04:23 crc kubenswrapper[4770]: E0126 19:04:23.956375 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-metadata" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956384 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-metadata" Jan 26 19:04:23 crc kubenswrapper[4770]: E0126 19:04:23.956405 4770 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0cf8c60c-ba0f-4c3e-8df1-8323360857b5" containerName="nova-manage" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956412 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf8c60c-ba0f-4c3e-8df1-8323360857b5" containerName="nova-manage" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956643 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-log" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956664 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" containerName="nova-metadata-metadata" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956683 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf8c60c-ba0f-4c3e-8df1-8323360857b5" containerName="nova-manage" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.956769 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5878d0-6bfa-43b8-8382-7f0c503f7b24" containerName="dnsmasq-dns" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.958070 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.963800 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.964054 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.968393 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.980961 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7148ba5e-8608-4b09-b041-2099677ae056-logs\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.981039 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk77g\" (UniqueName: \"kubernetes.io/projected/7148ba5e-8608-4b09-b041-2099677ae056-kube-api-access-fk77g\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.981082 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.981145 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-config-data\") pod \"nova-metadata-0\" 
(UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:23 crc kubenswrapper[4770]: I0126 19:04:23.981205 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.083674 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-config-data\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.083992 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.084110 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7148ba5e-8608-4b09-b041-2099677ae056-logs\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.084178 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk77g\" (UniqueName: \"kubernetes.io/projected/7148ba5e-8608-4b09-b041-2099677ae056-kube-api-access-fk77g\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.084225 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.084934 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7148ba5e-8608-4b09-b041-2099677ae056-logs\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.088641 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.089260 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-config-data\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.090825 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.103580 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk77g\" (UniqueName: \"kubernetes.io/projected/7148ba5e-8608-4b09-b041-2099677ae056-kube-api-access-fk77g\") pod 
\"nova-metadata-0\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") " pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.290279 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.758348 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.884791 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7148ba5e-8608-4b09-b041-2099677ae056","Type":"ContainerStarted","Data":"f20e54b53ec64a025edd3fd213bf8b5ad2d5d7b18a499f7a0df0ad940f56a05c"} Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.887750 4770 generic.go:334] "Generic (PLEG): container finished" podID="d07378f5-6d68-438a-8bd0-01b033da7b25" containerID="979e8eedb42cf5c4b771d1bb67c11ee682fba3e594bc07d121a304427ee27269" exitCode=0 Jan 26 19:04:24 crc kubenswrapper[4770]: I0126 19:04:24.887827 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-69wnj" event={"ID":"d07378f5-6d68-438a-8bd0-01b033da7b25","Type":"ContainerDied","Data":"979e8eedb42cf5c4b771d1bb67c11ee682fba3e594bc07d121a304427ee27269"} Jan 26 19:04:25 crc kubenswrapper[4770]: I0126 19:04:25.780652 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3" path="/var/lib/kubelet/pods/a6ee7ecb-a1b1-40ba-88b0-e1608aeb06d3/volumes" Jan 26 19:04:25 crc kubenswrapper[4770]: I0126 19:04:25.902023 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7148ba5e-8608-4b09-b041-2099677ae056","Type":"ContainerStarted","Data":"51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773"} Jan 26 19:04:25 crc kubenswrapper[4770]: I0126 19:04:25.902481 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"7148ba5e-8608-4b09-b041-2099677ae056","Type":"ContainerStarted","Data":"e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a"} Jan 26 19:04:25 crc kubenswrapper[4770]: I0126 19:04:25.964308 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.964274509 podStartE2EDuration="2.964274509s" podCreationTimestamp="2026-01-26 19:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:25.959070117 +0000 UTC m=+1350.523976859" watchObservedRunningTime="2026-01-26 19:04:25.964274509 +0000 UTC m=+1350.529181251" Jan 26 19:04:26 crc kubenswrapper[4770]: E0126 19:04:26.042180 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:04:26 crc kubenswrapper[4770]: E0126 19:04:26.051312 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:04:26 crc kubenswrapper[4770]: E0126 19:04:26.053514 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 19:04:26 crc kubenswrapper[4770]: E0126 19:04:26.053590 4770 prober.go:104] 
"Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5ac238c7-92d4-46e8-8845-f77b39a3b141" containerName="nova-scheduler-scheduler" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.356141 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.440710 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-scripts\") pod \"d07378f5-6d68-438a-8bd0-01b033da7b25\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.440835 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-config-data\") pod \"d07378f5-6d68-438a-8bd0-01b033da7b25\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.440863 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnhk7\" (UniqueName: \"kubernetes.io/projected/d07378f5-6d68-438a-8bd0-01b033da7b25-kube-api-access-hnhk7\") pod \"d07378f5-6d68-438a-8bd0-01b033da7b25\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.440885 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-combined-ca-bundle\") pod \"d07378f5-6d68-438a-8bd0-01b033da7b25\" (UID: \"d07378f5-6d68-438a-8bd0-01b033da7b25\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.447868 4770 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07378f5-6d68-438a-8bd0-01b033da7b25-kube-api-access-hnhk7" (OuterVolumeSpecName: "kube-api-access-hnhk7") pod "d07378f5-6d68-438a-8bd0-01b033da7b25" (UID: "d07378f5-6d68-438a-8bd0-01b033da7b25"). InnerVolumeSpecName "kube-api-access-hnhk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.447967 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-scripts" (OuterVolumeSpecName: "scripts") pod "d07378f5-6d68-438a-8bd0-01b033da7b25" (UID: "d07378f5-6d68-438a-8bd0-01b033da7b25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.475372 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-config-data" (OuterVolumeSpecName: "config-data") pod "d07378f5-6d68-438a-8bd0-01b033da7b25" (UID: "d07378f5-6d68-438a-8bd0-01b033da7b25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.482423 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d07378f5-6d68-438a-8bd0-01b033da7b25" (UID: "d07378f5-6d68-438a-8bd0-01b033da7b25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.543033 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.543177 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.543265 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnhk7\" (UniqueName: \"kubernetes.io/projected/d07378f5-6d68-438a-8bd0-01b033da7b25-kube-api-access-hnhk7\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.543336 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07378f5-6d68-438a-8bd0-01b033da7b25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.606887 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.644996 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-config-data\") pod \"0ad9ec69-750c-4908-a389-0a95f787f5f9\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.645061 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-combined-ca-bundle\") pod \"0ad9ec69-750c-4908-a389-0a95f787f5f9\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.645125 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9ec69-750c-4908-a389-0a95f787f5f9-logs\") pod \"0ad9ec69-750c-4908-a389-0a95f787f5f9\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.645238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5xzs\" (UniqueName: \"kubernetes.io/projected/0ad9ec69-750c-4908-a389-0a95f787f5f9-kube-api-access-k5xzs\") pod \"0ad9ec69-750c-4908-a389-0a95f787f5f9\" (UID: \"0ad9ec69-750c-4908-a389-0a95f787f5f9\") " Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.645859 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ad9ec69-750c-4908-a389-0a95f787f5f9-logs" (OuterVolumeSpecName: "logs") pod "0ad9ec69-750c-4908-a389-0a95f787f5f9" (UID: "0ad9ec69-750c-4908-a389-0a95f787f5f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.654745 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad9ec69-750c-4908-a389-0a95f787f5f9-kube-api-access-k5xzs" (OuterVolumeSpecName: "kube-api-access-k5xzs") pod "0ad9ec69-750c-4908-a389-0a95f787f5f9" (UID: "0ad9ec69-750c-4908-a389-0a95f787f5f9"). InnerVolumeSpecName "kube-api-access-k5xzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.670825 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-config-data" (OuterVolumeSpecName: "config-data") pod "0ad9ec69-750c-4908-a389-0a95f787f5f9" (UID: "0ad9ec69-750c-4908-a389-0a95f787f5f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.677303 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ad9ec69-750c-4908-a389-0a95f787f5f9" (UID: "0ad9ec69-750c-4908-a389-0a95f787f5f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.748207 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.748402 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad9ec69-750c-4908-a389-0a95f787f5f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.748526 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad9ec69-750c-4908-a389-0a95f787f5f9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.748593 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5xzs\" (UniqueName: \"kubernetes.io/projected/0ad9ec69-750c-4908-a389-0a95f787f5f9-kube-api-access-k5xzs\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.911234 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-69wnj" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.911245 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-69wnj" event={"ID":"d07378f5-6d68-438a-8bd0-01b033da7b25","Type":"ContainerDied","Data":"35c6719efb10bc8b24e74ba15cb7aff6db057e2885c4825fad63e0f485321d16"} Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.911668 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35c6719efb10bc8b24e74ba15cb7aff6db057e2885c4825fad63e0f485321d16" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.912844 4770 generic.go:334] "Generic (PLEG): container finished" podID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerID="e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924" exitCode=0 Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.912991 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ad9ec69-750c-4908-a389-0a95f787f5f9","Type":"ContainerDied","Data":"e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924"} Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.913031 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0ad9ec69-750c-4908-a389-0a95f787f5f9","Type":"ContainerDied","Data":"4469d51cb5f2662b9933d5afb9123be8144fa379638ff520dcfab7ce5e39a06f"} Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.913056 4770 scope.go:117] "RemoveContainer" containerID="e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.914535 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.935721 4770 scope.go:117] "RemoveContainer" containerID="9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.975096 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.986946 4770 scope.go:117] "RemoveContainer" containerID="e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924" Jan 26 19:04:26 crc kubenswrapper[4770]: E0126 19:04:26.988579 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924\": container with ID starting with e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924 not found: ID does not exist" containerID="e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.988631 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924"} err="failed to get container status \"e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924\": rpc error: code = NotFound desc = could not find container \"e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924\": container with ID starting with e9370228d042fb3503bdbf2bd2819298f7b33a1ee77ff6ee05ac8ec64f1cf924 not found: ID does not exist" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.988688 4770 scope.go:117] "RemoveContainer" containerID="9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f" Jan 26 19:04:26 crc kubenswrapper[4770]: E0126 19:04:26.992164 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f\": container with ID starting with 9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f not found: ID does not exist" containerID="9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.992214 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f"} err="failed to get container status \"9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f\": rpc error: code = NotFound desc = could not find container \"9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f\": container with ID starting with 9911cf8f0d3939ce0d26e7ebabfb033e037318aca7cce815e8ddf4bf4a97d23f not found: ID does not exist" Jan 26 19:04:26 crc kubenswrapper[4770]: I0126 19:04:26.992260 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.004909 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:27 crc kubenswrapper[4770]: E0126 19:04:27.005424 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-log" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.005452 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-log" Jan 26 19:04:27 crc kubenswrapper[4770]: E0126 19:04:27.005476 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07378f5-6d68-438a-8bd0-01b033da7b25" containerName="nova-cell1-conductor-db-sync" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.005487 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07378f5-6d68-438a-8bd0-01b033da7b25" containerName="nova-cell1-conductor-db-sync" Jan 26 19:04:27 crc 
kubenswrapper[4770]: E0126 19:04:27.005509 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-api" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.005518 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-api" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.005801 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07378f5-6d68-438a-8bd0-01b033da7b25" containerName="nova-cell1-conductor-db-sync" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.005841 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-log" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.005856 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" containerName="nova-api-api" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.007242 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.011368 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.016847 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.018969 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.020889 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.024842 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.033540 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.054303 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5bd373-f3aa-42ca-8360-32e1de10c999-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.054365 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.054391 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtjw\" (UniqueName: \"kubernetes.io/projected/6a5bd373-f3aa-42ca-8360-32e1de10c999-kube-api-access-vgtjw\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.054462 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtfn\" (UniqueName: 
\"kubernetes.io/projected/f9795e63-c02f-4b38-8e3d-59291af1f755-kube-api-access-7vtfn\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.054496 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-config-data\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.054539 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5bd373-f3aa-42ca-8360-32e1de10c999-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.054556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9795e63-c02f-4b38-8e3d-59291af1f755-logs\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.155580 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.155639 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtjw\" (UniqueName: \"kubernetes.io/projected/6a5bd373-f3aa-42ca-8360-32e1de10c999-kube-api-access-vgtjw\") pod \"nova-cell1-conductor-0\" (UID: 
\"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.155747 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtfn\" (UniqueName: \"kubernetes.io/projected/f9795e63-c02f-4b38-8e3d-59291af1f755-kube-api-access-7vtfn\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.155784 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-config-data\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.155815 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5bd373-f3aa-42ca-8360-32e1de10c999-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.155829 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9795e63-c02f-4b38-8e3d-59291af1f755-logs\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.155862 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5bd373-f3aa-42ca-8360-32e1de10c999-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.157495 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9795e63-c02f-4b38-8e3d-59291af1f755-logs\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.160793 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-config-data\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.160876 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5bd373-f3aa-42ca-8360-32e1de10c999-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.169673 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5bd373-f3aa-42ca-8360-32e1de10c999-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.172825 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.176801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtjw\" (UniqueName: \"kubernetes.io/projected/6a5bd373-f3aa-42ca-8360-32e1de10c999-kube-api-access-vgtjw\") pod \"nova-cell1-conductor-0\" (UID: \"6a5bd373-f3aa-42ca-8360-32e1de10c999\") " 
pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.184351 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtfn\" (UniqueName: \"kubernetes.io/projected/f9795e63-c02f-4b38-8e3d-59291af1f755-kube-api-access-7vtfn\") pod \"nova-api-0\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") " pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.350613 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.361260 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.714395 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.777154 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-config-data\") pod \"5ac238c7-92d4-46e8-8845-f77b39a3b141\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.777205 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-combined-ca-bundle\") pod \"5ac238c7-92d4-46e8-8845-f77b39a3b141\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.777660 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhtj4\" (UniqueName: \"kubernetes.io/projected/5ac238c7-92d4-46e8-8845-f77b39a3b141-kube-api-access-qhtj4\") pod \"5ac238c7-92d4-46e8-8845-f77b39a3b141\" (UID: \"5ac238c7-92d4-46e8-8845-f77b39a3b141\") " Jan 26 
19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.777729 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad9ec69-750c-4908-a389-0a95f787f5f9" path="/var/lib/kubelet/pods/0ad9ec69-750c-4908-a389-0a95f787f5f9/volumes" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.784431 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac238c7-92d4-46e8-8845-f77b39a3b141-kube-api-access-qhtj4" (OuterVolumeSpecName: "kube-api-access-qhtj4") pod "5ac238c7-92d4-46e8-8845-f77b39a3b141" (UID: "5ac238c7-92d4-46e8-8845-f77b39a3b141"). InnerVolumeSpecName "kube-api-access-qhtj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.804088 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-config-data" (OuterVolumeSpecName: "config-data") pod "5ac238c7-92d4-46e8-8845-f77b39a3b141" (UID: "5ac238c7-92d4-46e8-8845-f77b39a3b141"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.811895 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ac238c7-92d4-46e8-8845-f77b39a3b141" (UID: "5ac238c7-92d4-46e8-8845-f77b39a3b141"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.851409 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.879631 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhtj4\" (UniqueName: \"kubernetes.io/projected/5ac238c7-92d4-46e8-8845-f77b39a3b141-kube-api-access-qhtj4\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.879663 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.879676 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ac238c7-92d4-46e8-8845-f77b39a3b141-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.893597 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.924881 4770 generic.go:334] "Generic (PLEG): container finished" podID="5ac238c7-92d4-46e8-8845-f77b39a3b141" containerID="a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" exitCode=0 Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.924954 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5ac238c7-92d4-46e8-8845-f77b39a3b141","Type":"ContainerDied","Data":"a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988"} Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.924986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"5ac238c7-92d4-46e8-8845-f77b39a3b141","Type":"ContainerDied","Data":"d8c6af586c0f0290789369380f19a964eadd2863d1cdab30cfab28222bf27b35"} Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.925007 4770 scope.go:117] "RemoveContainer" containerID="a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.925138 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.928689 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9795e63-c02f-4b38-8e3d-59291af1f755","Type":"ContainerStarted","Data":"d34640ff5a0cdba21fabee479bacdcd80da1417a508aec83fd14acbeba74cd6f"} Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.932004 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6a5bd373-f3aa-42ca-8360-32e1de10c999","Type":"ContainerStarted","Data":"418bfbd9102e524521475c275d0902d4daedd576720f250154744981f78ceff0"} Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.968996 4770 scope.go:117] "RemoveContainer" containerID="a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" Jan 26 19:04:27 crc kubenswrapper[4770]: E0126 19:04:27.970051 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988\": container with ID starting with a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988 not found: ID does not exist" containerID="a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.970102 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988"} err="failed 
to get container status \"a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988\": rpc error: code = NotFound desc = could not find container \"a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988\": container with ID starting with a6b10ee7984cd0396a96dd551471def3b345f367045f4b69962f03abcab15988 not found: ID does not exist" Jan 26 19:04:27 crc kubenswrapper[4770]: I0126 19:04:27.992024 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.006579 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.016502 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:28 crc kubenswrapper[4770]: E0126 19:04:28.016905 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac238c7-92d4-46e8-8845-f77b39a3b141" containerName="nova-scheduler-scheduler" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.016925 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac238c7-92d4-46e8-8845-f77b39a3b141" containerName="nova-scheduler-scheduler" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.017237 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac238c7-92d4-46e8-8845-f77b39a3b141" containerName="nova-scheduler-scheduler" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.017932 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.020321 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.029792 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.084524 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.084621 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-config-data\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.084652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtft\" (UniqueName: \"kubernetes.io/projected/500bf0cd-db31-4cda-b921-c069e9787b0d-kube-api-access-kgtft\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.186734 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-config-data\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.186780 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kgtft\" (UniqueName: \"kubernetes.io/projected/500bf0cd-db31-4cda-b921-c069e9787b0d-kube-api-access-kgtft\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.186935 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.192403 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.192974 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-config-data\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.204516 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgtft\" (UniqueName: \"kubernetes.io/projected/500bf0cd-db31-4cda-b921-c069e9787b0d-kube-api-access-kgtft\") pod \"nova-scheduler-0\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.355992 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.860031 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.948415 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"500bf0cd-db31-4cda-b921-c069e9787b0d","Type":"ContainerStarted","Data":"5f1909c7d44ddc1c1c089ba8d596cd25298bb370802e52ebc4c7dafea8557638"} Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.952554 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6a5bd373-f3aa-42ca-8360-32e1de10c999","Type":"ContainerStarted","Data":"5f43f8574416b19c47d036a2f4b06ecb06668a0d56d809fa78faee60bedab052"} Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.952754 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.958646 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9795e63-c02f-4b38-8e3d-59291af1f755","Type":"ContainerStarted","Data":"5e8fc8571209ee3b5959a8a5d6708c6291bd65bac492d5f6e164a685e8b7e24d"} Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.958899 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9795e63-c02f-4b38-8e3d-59291af1f755","Type":"ContainerStarted","Data":"9773c9516f55867c109f990a77cf8f07c9fef5fd61a8425aa4287397877844f0"} Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.977815 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.977785119 podStartE2EDuration="2.977785119s" podCreationTimestamp="2026-01-26 19:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 19:04:28.966856419 +0000 UTC m=+1353.531763171" watchObservedRunningTime="2026-01-26 19:04:28.977785119 +0000 UTC m=+1353.542691851" Jan 26 19:04:28 crc kubenswrapper[4770]: I0126 19:04:28.994474 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9944527560000003 podStartE2EDuration="2.994452756s" podCreationTimestamp="2026-01-26 19:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:28.985447049 +0000 UTC m=+1353.550353781" watchObservedRunningTime="2026-01-26 19:04:28.994452756 +0000 UTC m=+1353.559359488" Jan 26 19:04:29 crc kubenswrapper[4770]: I0126 19:04:29.290931 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:04:29 crc kubenswrapper[4770]: I0126 19:04:29.291020 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:04:29 crc kubenswrapper[4770]: I0126 19:04:29.783461 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac238c7-92d4-46e8-8845-f77b39a3b141" path="/var/lib/kubelet/pods/5ac238c7-92d4-46e8-8845-f77b39a3b141/volumes" Jan 26 19:04:29 crc kubenswrapper[4770]: I0126 19:04:29.970469 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"500bf0cd-db31-4cda-b921-c069e9787b0d","Type":"ContainerStarted","Data":"e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c"} Jan 26 19:04:29 crc kubenswrapper[4770]: I0126 19:04:29.990207 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.990183599 podStartE2EDuration="2.990183599s" podCreationTimestamp="2026-01-26 19:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 19:04:29.98876107 +0000 UTC m=+1354.553667812" watchObservedRunningTime="2026-01-26 19:04:29.990183599 +0000 UTC m=+1354.555090351" Jan 26 19:04:30 crc kubenswrapper[4770]: I0126 19:04:30.330635 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:04:30 crc kubenswrapper[4770]: I0126 19:04:30.330743 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:04:31 crc kubenswrapper[4770]: I0126 19:04:31.966617 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 19:04:33 crc kubenswrapper[4770]: I0126 19:04:33.357806 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 19:04:34 crc kubenswrapper[4770]: I0126 19:04:34.291558 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:04:34 crc kubenswrapper[4770]: I0126 19:04:34.291622 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:04:35 crc kubenswrapper[4770]: I0126 19:04:35.307897 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 19:04:35 crc kubenswrapper[4770]: 
I0126 19:04:35.307916 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:04:35 crc kubenswrapper[4770]: I0126 19:04:35.604547 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:04:35 crc kubenswrapper[4770]: I0126 19:04:35.604772 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" containerName="kube-state-metrics" containerID="cri-o://d4228322eb071a12a8b1f78e5129821c87c0f48a6f2d5ec1c7d4ead922ff8815" gracePeriod=30 Jan 26 19:04:36 crc kubenswrapper[4770]: I0126 19:04:36.085174 4770 generic.go:334] "Generic (PLEG): container finished" podID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" containerID="d4228322eb071a12a8b1f78e5129821c87c0f48a6f2d5ec1c7d4ead922ff8815" exitCode=2 Jan 26 19:04:36 crc kubenswrapper[4770]: I0126 19:04:36.085406 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0","Type":"ContainerDied","Data":"d4228322eb071a12a8b1f78e5129821c87c0f48a6f2d5ec1c7d4ead922ff8815"} Jan 26 19:04:36 crc kubenswrapper[4770]: I0126 19:04:36.225281 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 19:04:36 crc kubenswrapper[4770]: I0126 19:04:36.398656 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9hhm\" (UniqueName: \"kubernetes.io/projected/809b98d0-f155-4506-8dd3-e0cb6c3a6ff0-kube-api-access-b9hhm\") pod \"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0\" (UID: \"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0\") " Jan 26 19:04:36 crc kubenswrapper[4770]: I0126 19:04:36.404152 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809b98d0-f155-4506-8dd3-e0cb6c3a6ff0-kube-api-access-b9hhm" (OuterVolumeSpecName: "kube-api-access-b9hhm") pod "809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" (UID: "809b98d0-f155-4506-8dd3-e0cb6c3a6ff0"). InnerVolumeSpecName "kube-api-access-b9hhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:36 crc kubenswrapper[4770]: I0126 19:04:36.500863 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9hhm\" (UniqueName: \"kubernetes.io/projected/809b98d0-f155-4506-8dd3-e0cb6c3a6ff0-kube-api-access-b9hhm\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.106047 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"809b98d0-f155-4506-8dd3-e0cb6c3a6ff0","Type":"ContainerDied","Data":"49c5d55ddda65faab15ff450bca3172692d101f07aea0640d874e4dfca12fc9c"} Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.106368 4770 scope.go:117] "RemoveContainer" containerID="d4228322eb071a12a8b1f78e5129821c87c0f48a6f2d5ec1c7d4ead922ff8815" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.106488 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.155318 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.165958 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.180735 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:04:37 crc kubenswrapper[4770]: E0126 19:04:37.181251 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" containerName="kube-state-metrics" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.181270 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" containerName="kube-state-metrics" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.181457 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" containerName="kube-state-metrics" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.182228 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.184273 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.184742 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.216292 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.217792 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.218002 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.218671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.218749 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw556\" (UniqueName: 
\"kubernetes.io/projected/6994181f-05b0-468c-911a-4f910e017419-kube-api-access-sw556\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.320503 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.320590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.320638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw556\" (UniqueName: \"kubernetes.io/projected/6994181f-05b0-468c-911a-4f910e017419-kube-api-access-sw556\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.320724 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.324934 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.325202 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.331283 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6994181f-05b0-468c-911a-4f910e017419-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.338223 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw556\" (UniqueName: \"kubernetes.io/projected/6994181f-05b0-468c-911a-4f910e017419-kube-api-access-sw556\") pod \"kube-state-metrics-0\" (UID: \"6994181f-05b0-468c-911a-4f910e017419\") " pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.351588 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.351790 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.414310 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.500335 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.781104 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809b98d0-f155-4506-8dd3-e0cb6c3a6ff0" path="/var/lib/kubelet/pods/809b98d0-f155-4506-8dd3-e0cb6c3a6ff0/volumes" Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.816818 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.817228 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-central-agent" containerID="cri-o://3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36" gracePeriod=30 Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.817596 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="proxy-httpd" containerID="cri-o://09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce" gracePeriod=30 Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.817653 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="sg-core" containerID="cri-o://23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563" gracePeriod=30 Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.817730 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-notification-agent" containerID="cri-o://3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9" gracePeriod=30 Jan 26 19:04:37 crc kubenswrapper[4770]: I0126 19:04:37.964545 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/kube-state-metrics-0"] Jan 26 19:04:37 crc kubenswrapper[4770]: W0126 19:04:37.966148 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6994181f_05b0_468c_911a_4f910e017419.slice/crio-27a3a0ae49ec734ee82871306f56ec07f5fe22f4711b842ec9edf4bfde0d728f WatchSource:0}: Error finding container 27a3a0ae49ec734ee82871306f56ec07f5fe22f4711b842ec9edf4bfde0d728f: Status 404 returned error can't find the container with id 27a3a0ae49ec734ee82871306f56ec07f5fe22f4711b842ec9edf4bfde0d728f Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.121403 4770 generic.go:334] "Generic (PLEG): container finished" podID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerID="09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce" exitCode=0 Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.121435 4770 generic.go:334] "Generic (PLEG): container finished" podID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerID="23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563" exitCode=2 Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.121447 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerDied","Data":"09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce"} Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.121499 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerDied","Data":"23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563"} Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.122974 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6994181f-05b0-468c-911a-4f910e017419","Type":"ContainerStarted","Data":"27a3a0ae49ec734ee82871306f56ec07f5fe22f4711b842ec9edf4bfde0d728f"} Jan 26 
19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.357133 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.391911 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.439824 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:04:38 crc kubenswrapper[4770]: I0126 19:04:38.440257 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:04:39 crc kubenswrapper[4770]: I0126 19:04:39.141228 4770 generic.go:334] "Generic (PLEG): container finished" podID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerID="3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36" exitCode=0 Jan 26 19:04:39 crc kubenswrapper[4770]: I0126 19:04:39.141856 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerDied","Data":"3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36"} Jan 26 19:04:39 crc kubenswrapper[4770]: I0126 19:04:39.144999 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6994181f-05b0-468c-911a-4f910e017419","Type":"ContainerStarted","Data":"35d94e7f06960eeff7ae85f03d5c6e26cda65503d48dba2ee7f9d9a675688c16"} Jan 26 19:04:39 crc kubenswrapper[4770]: I0126 19:04:39.145563 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 19:04:39 crc kubenswrapper[4770]: I0126 19:04:39.171783 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.788719352 podStartE2EDuration="2.17176212s" podCreationTimestamp="2026-01-26 19:04:37 +0000 UTC" firstStartedPulling="2026-01-26 19:04:37.968437739 +0000 UTC m=+1362.533344471" lastFinishedPulling="2026-01-26 19:04:38.351480507 +0000 UTC m=+1362.916387239" observedRunningTime="2026-01-26 19:04:39.164017606 +0000 UTC m=+1363.728924348" watchObservedRunningTime="2026-01-26 19:04:39.17176212 +0000 UTC m=+1363.736668842" Jan 26 19:04:39 crc kubenswrapper[4770]: I0126 19:04:39.178335 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.301975 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.307662 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.311466 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.766550 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.861642 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-combined-ca-bundle\") pod \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.861766 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-config-data\") pod \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.861905 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-sg-core-conf-yaml\") pod \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.861963 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-run-httpd\") pod \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.862022 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-log-httpd\") pod \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.862068 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-scripts\") pod \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.862120 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpfbd\" (UniqueName: \"kubernetes.io/projected/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-kube-api-access-dpfbd\") pod \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\" (UID: \"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3\") " Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.862324 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" (UID: "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.862441 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" (UID: "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.862670 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.862692 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.894807 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-scripts" (OuterVolumeSpecName: "scripts") pod "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" (UID: "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.894901 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-kube-api-access-dpfbd" (OuterVolumeSpecName: "kube-api-access-dpfbd") pod "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" (UID: "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3"). InnerVolumeSpecName "kube-api-access-dpfbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.899847 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" (UID: "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.942565 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" (UID: "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.965186 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.965219 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.965238 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpfbd\" (UniqueName: \"kubernetes.io/projected/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-kube-api-access-dpfbd\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.965257 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:44 crc kubenswrapper[4770]: I0126 19:04:44.986758 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-config-data" (OuterVolumeSpecName: "config-data") pod "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" (UID: "7df1c475-3b6e-4efb-bc53-f2b85cbf71a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.067344 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.203231 4770 generic.go:334] "Generic (PLEG): container finished" podID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerID="3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9" exitCode=0 Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.203371 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.204628 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerDied","Data":"3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9"} Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.204673 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7df1c475-3b6e-4efb-bc53-f2b85cbf71a3","Type":"ContainerDied","Data":"fb53de670cc50bfde42cf7fc92dc0deec63919492538450275ff9a14492135d6"} Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.204714 4770 scope.go:117] "RemoveContainer" containerID="09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.211144 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.229956 4770 scope.go:117] "RemoveContainer" containerID="23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.265147 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.293008 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.319481 4770 scope.go:117] "RemoveContainer" containerID="3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.323826 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.325289 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-central-agent" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325468 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-central-agent" Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.325488 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="proxy-httpd" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325497 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="proxy-httpd" Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.325517 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="sg-core" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325525 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="sg-core" Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.325544 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-notification-agent" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325550 4770 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-notification-agent" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325814 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="sg-core" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325830 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="proxy-httpd" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325851 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-notification-agent" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.325861 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" containerName="ceilometer-central-agent" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.328064 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.331282 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.332000 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.334963 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.342539 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375148 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375297 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98khv\" (UniqueName: \"kubernetes.io/projected/8389a4b8-d37f-4b24-8447-ca4be67c43c0-kube-api-access-98khv\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375516 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375621 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-log-httpd\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375731 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375862 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-scripts\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375893 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-run-httpd\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.375921 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-config-data\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.413984 4770 scope.go:117] "RemoveContainer" containerID="3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.441333 4770 scope.go:117] "RemoveContainer" 
containerID="09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce" Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.441833 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce\": container with ID starting with 09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce not found: ID does not exist" containerID="09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.441871 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce"} err="failed to get container status \"09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce\": rpc error: code = NotFound desc = could not find container \"09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce\": container with ID starting with 09d04d834e9a9ed4e38678075c400410acaa901269f03b1ae7986b02bd0215ce not found: ID does not exist" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.441897 4770 scope.go:117] "RemoveContainer" containerID="23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563" Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.442359 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563\": container with ID starting with 23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563 not found: ID does not exist" containerID="23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.442391 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563"} err="failed to get container status \"23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563\": rpc error: code = NotFound desc = could not find container \"23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563\": container with ID starting with 23c1c25d139c9122654b5eac57dcddc16a092a31710a2f655e6a06c6fcf47563 not found: ID does not exist" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.442415 4770 scope.go:117] "RemoveContainer" containerID="3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9" Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.442854 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9\": container with ID starting with 3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9 not found: ID does not exist" containerID="3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.442880 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9"} err="failed to get container status \"3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9\": rpc error: code = NotFound desc = could not find container \"3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9\": container with ID starting with 3a2b8f77e3b212c8e110da693db8db3d4247b44984333c55558fecb0c21d97e9 not found: ID does not exist" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.442897 4770 scope.go:117] "RemoveContainer" containerID="3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36" Jan 26 19:04:45 crc kubenswrapper[4770]: E0126 19:04:45.443206 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36\": container with ID starting with 3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36 not found: ID does not exist" containerID="3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.443231 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36"} err="failed to get container status \"3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36\": rpc error: code = NotFound desc = could not find container \"3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36\": container with ID starting with 3c5e552770c59786de2a47073aa7824c73c18ae83a18ddba68db07595c7c7e36 not found: ID does not exist" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.478991 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98khv\" (UniqueName: \"kubernetes.io/projected/8389a4b8-d37f-4b24-8447-ca4be67c43c0-kube-api-access-98khv\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.479060 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.479122 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-log-httpd\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " 
pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.479192 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.479276 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-scripts\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.479302 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-run-httpd\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.479319 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-config-data\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.479381 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.480022 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-run-httpd\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.480099 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-log-httpd\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.484072 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-scripts\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.484437 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.484575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-config-data\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.486241 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.489463 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.495944 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98khv\" (UniqueName: \"kubernetes.io/projected/8389a4b8-d37f-4b24-8447-ca4be67c43c0-kube-api-access-98khv\") pod \"ceilometer-0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.706075 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:45 crc kubenswrapper[4770]: I0126 19:04:45.777849 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df1c475-3b6e-4efb-bc53-f2b85cbf71a3" path="/var/lib/kubelet/pods/7df1c475-3b6e-4efb-bc53-f2b85cbf71a3/volumes" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.151503 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.159394 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:46 crc kubenswrapper[4770]: W0126 19:04:46.160078 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8389a4b8_d37f_4b24_8447_ca4be67c43c0.slice/crio-2b21f6942851d1109d2ff249df8205e1daf711aa17c1f4c1930382bd51dcecbb WatchSource:0}: Error finding container 2b21f6942851d1109d2ff249df8205e1daf711aa17c1f4c1930382bd51dcecbb: Status 404 returned error can't find the container with id 2b21f6942851d1109d2ff249df8205e1daf711aa17c1f4c1930382bd51dcecbb Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.194645 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-combined-ca-bundle\") pod \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.195126 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-config-data\") pod \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.195591 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkgfw\" (UniqueName: \"kubernetes.io/projected/f1e4248b-3c62-4777-8eeb-07cae864a0bc-kube-api-access-tkgfw\") pod \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\" (UID: \"f1e4248b-3c62-4777-8eeb-07cae864a0bc\") " Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.200627 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f1e4248b-3c62-4777-8eeb-07cae864a0bc-kube-api-access-tkgfw" (OuterVolumeSpecName: "kube-api-access-tkgfw") pod "f1e4248b-3c62-4777-8eeb-07cae864a0bc" (UID: "f1e4248b-3c62-4777-8eeb-07cae864a0bc"). InnerVolumeSpecName "kube-api-access-tkgfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.218305 4770 generic.go:334] "Generic (PLEG): container finished" podID="f1e4248b-3c62-4777-8eeb-07cae864a0bc" containerID="d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da" exitCode=137 Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.218428 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.219353 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1e4248b-3c62-4777-8eeb-07cae864a0bc","Type":"ContainerDied","Data":"d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da"} Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.219462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1e4248b-3c62-4777-8eeb-07cae864a0bc","Type":"ContainerDied","Data":"e5c2d9db8c8ceff2dce3624e358f036a2fcc00ac68e356032a36a8e9b1331de7"} Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.219546 4770 scope.go:117] "RemoveContainer" containerID="d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.222971 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerStarted","Data":"2b21f6942851d1109d2ff249df8205e1daf711aa17c1f4c1930382bd51dcecbb"} Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.223791 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-config-data" (OuterVolumeSpecName: "config-data") pod "f1e4248b-3c62-4777-8eeb-07cae864a0bc" (UID: "f1e4248b-3c62-4777-8eeb-07cae864a0bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.227484 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1e4248b-3c62-4777-8eeb-07cae864a0bc" (UID: "f1e4248b-3c62-4777-8eeb-07cae864a0bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.246132 4770 scope.go:117] "RemoveContainer" containerID="d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da" Jan 26 19:04:46 crc kubenswrapper[4770]: E0126 19:04:46.246719 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da\": container with ID starting with d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da not found: ID does not exist" containerID="d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.246760 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da"} err="failed to get container status \"d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da\": rpc error: code = NotFound desc = could not find container \"d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da\": container with ID starting with d7d5dcc98b7c31e455ae64009295e1547d73e250e7926343cebd4275d46c99da not found: ID does not exist" Jan 26 19:04:46 crc 
kubenswrapper[4770]: I0126 19:04:46.298844 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkgfw\" (UniqueName: \"kubernetes.io/projected/f1e4248b-3c62-4777-8eeb-07cae864a0bc-kube-api-access-tkgfw\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.298872 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.298880 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e4248b-3c62-4777-8eeb-07cae864a0bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.631995 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.644920 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.656864 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:46 crc kubenswrapper[4770]: E0126 19:04:46.657581 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e4248b-3c62-4777-8eeb-07cae864a0bc" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.657657 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e4248b-3c62-4777-8eeb-07cae864a0bc" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.657949 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e4248b-3c62-4777-8eeb-07cae864a0bc" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.658659 4770 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.664827 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.665372 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.665622 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.665788 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.705413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.705598 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.705624 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 
19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.705645 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.705673 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tgt\" (UniqueName: \"kubernetes.io/projected/fcf4af4b-a734-40c3-be45-ca0dd2a43124-kube-api-access-q6tgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.807490 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.807550 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.807584 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 
19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.807629 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6tgt\" (UniqueName: \"kubernetes.io/projected/fcf4af4b-a734-40c3-be45-ca0dd2a43124-kube-api-access-q6tgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.807713 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.811610 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.823433 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.823488 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.823544 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcf4af4b-a734-40c3-be45-ca0dd2a43124-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.829103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6tgt\" (UniqueName: \"kubernetes.io/projected/fcf4af4b-a734-40c3-be45-ca0dd2a43124-kube-api-access-q6tgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"fcf4af4b-a734-40c3-be45-ca0dd2a43124\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:46 crc kubenswrapper[4770]: I0126 19:04:46.977173 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.240845 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerStarted","Data":"5e789a0d19fd59676a3132954fdffc70c689494e2e4d3a20b9bf2c2fbba6713c"}
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.241063 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerStarted","Data":"cff82115f31f2121d3c02651531688f3c2a74bd03a578f8027b386ac32277d9b"}
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.360232 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.361126 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.369328 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.377250 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.454528 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 26 19:04:47 crc kubenswrapper[4770]: W0126 19:04:47.462101 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcf4af4b_a734_40c3_be45_ca0dd2a43124.slice/crio-d3396de5ba244ec6584fabb05b00eb2237d49b8edbe637822aad18659dca794c WatchSource:0}: Error finding container d3396de5ba244ec6584fabb05b00eb2237d49b8edbe637822aad18659dca794c: Status 404 returned error can't find the container with id d3396de5ba244ec6584fabb05b00eb2237d49b8edbe637822aad18659dca794c
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.532459 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 26 19:04:47 crc kubenswrapper[4770]: I0126 19:04:47.784638 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e4248b-3c62-4777-8eeb-07cae864a0bc" path="/var/lib/kubelet/pods/f1e4248b-3c62-4777-8eeb-07cae864a0bc/volumes"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.251539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fcf4af4b-a734-40c3-be45-ca0dd2a43124","Type":"ContainerStarted","Data":"351b86a63dc71b0ce26ad533a1d786ee0359a3d3b584aae98db610724df32ae0"}
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.251896 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fcf4af4b-a734-40c3-be45-ca0dd2a43124","Type":"ContainerStarted","Data":"d3396de5ba244ec6584fabb05b00eb2237d49b8edbe637822aad18659dca794c"}
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.254633 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerStarted","Data":"bbaa61ef1d4ccb509cd18d30d92d84e30ec072898eb9f8c0206ac8d20792436e"}
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.254910 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.267270 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.267233991 podStartE2EDuration="2.267233991s" podCreationTimestamp="2026-01-26 19:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:48.264501516 +0000 UTC m=+1372.829408248" watchObservedRunningTime="2026-01-26 19:04:48.267233991 +0000 UTC m=+1372.832140723"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.284741 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.471070 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-898947885-9fsdq"]
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.472640 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.492368 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-898947885-9fsdq"]
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.551992 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grcvz\" (UniqueName: \"kubernetes.io/projected/039e5aac-4654-43b3-aa42-710246c88b00-kube-api-access-grcvz\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.552030 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-config\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.552084 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-svc\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.552120 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-nb\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.552168 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-sb\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.552204 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-swift-storage-0\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.654247 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-sb\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.654536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-swift-storage-0\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.654611 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grcvz\" (UniqueName: \"kubernetes.io/projected/039e5aac-4654-43b3-aa42-710246c88b00-kube-api-access-grcvz\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.654633 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-config\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.654709 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-svc\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.654741 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-nb\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.656061 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-nb\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.656101 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-svc\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.656112 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-config\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.657033 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-swift-storage-0\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.657582 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-sb\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.689561 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grcvz\" (UniqueName: \"kubernetes.io/projected/039e5aac-4654-43b3-aa42-710246c88b00-kube-api-access-grcvz\") pod \"dnsmasq-dns-898947885-9fsdq\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:48 crc kubenswrapper[4770]: I0126 19:04:48.798221 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:49 crc kubenswrapper[4770]: I0126 19:04:49.274512 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerStarted","Data":"7ed70ee4b6b2e0ff5a85aab084ab447d9aa825f09b88155f26e0afd32280977b"}
Jan 26 19:04:49 crc kubenswrapper[4770]: I0126 19:04:49.275137 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 26 19:04:49 crc kubenswrapper[4770]: I0126 19:04:49.297619 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.86148311 podStartE2EDuration="4.297601316s" podCreationTimestamp="2026-01-26 19:04:45 +0000 UTC" firstStartedPulling="2026-01-26 19:04:46.15679982 +0000 UTC m=+1370.721706552" lastFinishedPulling="2026-01-26 19:04:48.592918026 +0000 UTC m=+1373.157824758" observedRunningTime="2026-01-26 19:04:49.293689728 +0000 UTC m=+1373.858596460" watchObservedRunningTime="2026-01-26 19:04:49.297601316 +0000 UTC m=+1373.862508048"
Jan 26 19:04:49 crc kubenswrapper[4770]: I0126 19:04:49.378755 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-898947885-9fsdq"]
Jan 26 19:04:50 crc kubenswrapper[4770]: I0126 19:04:50.289791 4770 generic.go:334] "Generic (PLEG): container finished" podID="039e5aac-4654-43b3-aa42-710246c88b00" containerID="85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574" exitCode=0
Jan 26 19:04:50 crc kubenswrapper[4770]: I0126 19:04:50.291668 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-898947885-9fsdq" event={"ID":"039e5aac-4654-43b3-aa42-710246c88b00","Type":"ContainerDied","Data":"85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574"}
Jan 26 19:04:50 crc kubenswrapper[4770]: I0126 19:04:50.291719 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-898947885-9fsdq" event={"ID":"039e5aac-4654-43b3-aa42-710246c88b00","Type":"ContainerStarted","Data":"2baf94c3c733fe7de49985e67db543e6fa11ee3398b510477241348cee97424e"}
Jan 26 19:04:50 crc kubenswrapper[4770]: I0126 19:04:50.840079 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 26 19:04:50 crc kubenswrapper[4770]: I0126 19:04:50.868211 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.301733 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-898947885-9fsdq" event={"ID":"039e5aac-4654-43b3-aa42-710246c88b00","Type":"ContainerStarted","Data":"7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9"}
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.301844 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="sg-core" containerID="cri-o://bbaa61ef1d4ccb509cd18d30d92d84e30ec072898eb9f8c0206ac8d20792436e" gracePeriod=30
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.301872 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-notification-agent" containerID="cri-o://5e789a0d19fd59676a3132954fdffc70c689494e2e4d3a20b9bf2c2fbba6713c" gracePeriod=30
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.301851 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="proxy-httpd" containerID="cri-o://7ed70ee4b6b2e0ff5a85aab084ab447d9aa825f09b88155f26e0afd32280977b" gracePeriod=30
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.302451 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-central-agent" containerID="cri-o://cff82115f31f2121d3c02651531688f3c2a74bd03a578f8027b386ac32277d9b" gracePeriod=30
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.302473 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-log" containerID="cri-o://9773c9516f55867c109f990a77cf8f07c9fef5fd61a8425aa4287397877844f0" gracePeriod=30
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.302511 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-api" containerID="cri-o://5e8fc8571209ee3b5959a8a5d6708c6291bd65bac492d5f6e164a685e8b7e24d" gracePeriod=30
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.333709 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-898947885-9fsdq" podStartSLOduration=3.333673244 podStartE2EDuration="3.333673244s" podCreationTimestamp="2026-01-26 19:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:51.328291057 +0000 UTC m=+1375.893197809" watchObservedRunningTime="2026-01-26 19:04:51.333673244 +0000 UTC m=+1375.898579986"
Jan 26 19:04:51 crc kubenswrapper[4770]: I0126 19:04:51.977483 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.314935 4770 generic.go:334] "Generic (PLEG): container finished" podID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerID="5e8fc8571209ee3b5959a8a5d6708c6291bd65bac492d5f6e164a685e8b7e24d" exitCode=0
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.314964 4770 generic.go:334] "Generic (PLEG): container finished" podID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerID="9773c9516f55867c109f990a77cf8f07c9fef5fd61a8425aa4287397877844f0" exitCode=143
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.315008 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9795e63-c02f-4b38-8e3d-59291af1f755","Type":"ContainerDied","Data":"5e8fc8571209ee3b5959a8a5d6708c6291bd65bac492d5f6e164a685e8b7e24d"}
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.315043 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9795e63-c02f-4b38-8e3d-59291af1f755","Type":"ContainerDied","Data":"9773c9516f55867c109f990a77cf8f07c9fef5fd61a8425aa4287397877844f0"}
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.320528 4770 generic.go:334] "Generic (PLEG): container finished" podID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerID="7ed70ee4b6b2e0ff5a85aab084ab447d9aa825f09b88155f26e0afd32280977b" exitCode=0
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.320566 4770 generic.go:334] "Generic (PLEG): container finished" podID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerID="bbaa61ef1d4ccb509cd18d30d92d84e30ec072898eb9f8c0206ac8d20792436e" exitCode=2
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.320577 4770 generic.go:334] "Generic (PLEG): container finished" podID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerID="5e789a0d19fd59676a3132954fdffc70c689494e2e4d3a20b9bf2c2fbba6713c" exitCode=0
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.320682 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerDied","Data":"7ed70ee4b6b2e0ff5a85aab084ab447d9aa825f09b88155f26e0afd32280977b"}
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.320746 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerDied","Data":"bbaa61ef1d4ccb509cd18d30d92d84e30ec072898eb9f8c0206ac8d20792436e"}
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.320757 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerDied","Data":"5e789a0d19fd59676a3132954fdffc70c689494e2e4d3a20b9bf2c2fbba6713c"}
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.321165 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-898947885-9fsdq"
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.646975 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.748658 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtfn\" (UniqueName: \"kubernetes.io/projected/f9795e63-c02f-4b38-8e3d-59291af1f755-kube-api-access-7vtfn\") pod \"f9795e63-c02f-4b38-8e3d-59291af1f755\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") "
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.748770 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-combined-ca-bundle\") pod \"f9795e63-c02f-4b38-8e3d-59291af1f755\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") "
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.748876 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9795e63-c02f-4b38-8e3d-59291af1f755-logs\") pod \"f9795e63-c02f-4b38-8e3d-59291af1f755\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") "
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.748939 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-config-data\") pod \"f9795e63-c02f-4b38-8e3d-59291af1f755\" (UID: \"f9795e63-c02f-4b38-8e3d-59291af1f755\") "
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.749341 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9795e63-c02f-4b38-8e3d-59291af1f755-logs" (OuterVolumeSpecName: "logs") pod "f9795e63-c02f-4b38-8e3d-59291af1f755" (UID: "f9795e63-c02f-4b38-8e3d-59291af1f755"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.749893 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9795e63-c02f-4b38-8e3d-59291af1f755-logs\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.755679 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9795e63-c02f-4b38-8e3d-59291af1f755-kube-api-access-7vtfn" (OuterVolumeSpecName: "kube-api-access-7vtfn") pod "f9795e63-c02f-4b38-8e3d-59291af1f755" (UID: "f9795e63-c02f-4b38-8e3d-59291af1f755"). InnerVolumeSpecName "kube-api-access-7vtfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.792151 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9795e63-c02f-4b38-8e3d-59291af1f755" (UID: "f9795e63-c02f-4b38-8e3d-59291af1f755"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.809832 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-config-data" (OuterVolumeSpecName: "config-data") pod "f9795e63-c02f-4b38-8e3d-59291af1f755" (UID: "f9795e63-c02f-4b38-8e3d-59291af1f755"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.854893 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.854945 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vtfn\" (UniqueName: \"kubernetes.io/projected/f9795e63-c02f-4b38-8e3d-59291af1f755-kube-api-access-7vtfn\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:52 crc kubenswrapper[4770]: I0126 19:04:52.854960 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9795e63-c02f-4b38-8e3d-59291af1f755-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.335479 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.336436 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9795e63-c02f-4b38-8e3d-59291af1f755","Type":"ContainerDied","Data":"d34640ff5a0cdba21fabee479bacdcd80da1417a508aec83fd14acbeba74cd6f"}
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.336491 4770 scope.go:117] "RemoveContainer" containerID="5e8fc8571209ee3b5959a8a5d6708c6291bd65bac492d5f6e164a685e8b7e24d"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.372728 4770 scope.go:117] "RemoveContainer" containerID="9773c9516f55867c109f990a77cf8f07c9fef5fd61a8425aa4287397877844f0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.387404 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.399376 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.425272 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:04:53 crc kubenswrapper[4770]: E0126 19:04:53.425882 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-log"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.425901 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-log"
Jan 26 19:04:53 crc kubenswrapper[4770]: E0126 19:04:53.425922 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-api"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.425928 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-api"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.426137 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-api"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.426163 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" containerName="nova-api-log"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.427429 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.431088 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.431145 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.431252 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.440265 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.574406 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz8hd\" (UniqueName: \"kubernetes.io/projected/2b95ad96-d640-41ec-9852-ddcf5424f174-kube-api-access-zz8hd\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.574524 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-config-data\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.574611 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.574646 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b95ad96-d640-41ec-9852-ddcf5424f174-logs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.574775 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.574840 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.676861 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.676939 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.676975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz8hd\" (UniqueName: \"kubernetes.io/projected/2b95ad96-d640-41ec-9852-ddcf5424f174-kube-api-access-zz8hd\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.677037 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-config-data\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.677074 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.677098 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b95ad96-d640-41ec-9852-ddcf5424f174-logs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.677460 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b95ad96-d640-41ec-9852-ddcf5424f174-logs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.683031 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.683496 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.685416 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-config-data\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.692479 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.696950 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz8hd\" (UniqueName: \"kubernetes.io/projected/2b95ad96-d640-41ec-9852-ddcf5424f174-kube-api-access-zz8hd\") pod \"nova-api-0\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") " pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.747929 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:04:53 crc kubenswrapper[4770]: I0126 19:04:53.789850 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9795e63-c02f-4b38-8e3d-59291af1f755" path="/var/lib/kubelet/pods/f9795e63-c02f-4b38-8e3d-59291af1f755/volumes"
Jan 26 19:04:54 crc kubenswrapper[4770]: I0126 19:04:54.287152 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:04:54 crc kubenswrapper[4770]: I0126 19:04:54.396518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b95ad96-d640-41ec-9852-ddcf5424f174","Type":"ContainerStarted","Data":"596c65f1bb1cc0b9081fbad9c1fd3af9b2899cff17b165bac517fb83978ac2e6"}
Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.428936 4770 generic.go:334] "Generic (PLEG): container finished" podID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerID="cff82115f31f2121d3c02651531688f3c2a74bd03a578f8027b386ac32277d9b" exitCode=0
Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.428984 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerDied","Data":"cff82115f31f2121d3c02651531688f3c2a74bd03a578f8027b386ac32277d9b"}
Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.431277 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b95ad96-d640-41ec-9852-ddcf5424f174","Type":"ContainerStarted","Data":"114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f"}
Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.431308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b95ad96-d640-41ec-9852-ddcf5424f174","Type":"ContainerStarted","Data":"5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04"}
Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.465505 4770 pod_startup_latency_tracker.go:104]
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.465487729 podStartE2EDuration="2.465487729s" podCreationTimestamp="2026-01-26 19:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:55.464463982 +0000 UTC m=+1380.029370734" watchObservedRunningTime="2026-01-26 19:04:55.465487729 +0000 UTC m=+1380.030394461" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.587582 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721022 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-log-httpd\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721084 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-sg-core-conf-yaml\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721105 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-combined-ca-bundle\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721175 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98khv\" (UniqueName: \"kubernetes.io/projected/8389a4b8-d37f-4b24-8447-ca4be67c43c0-kube-api-access-98khv\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" 
(UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721254 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-scripts\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721274 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-ceilometer-tls-certs\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721319 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-config-data\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.721366 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-run-httpd\") pod \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\" (UID: \"8389a4b8-d37f-4b24-8447-ca4be67c43c0\") " Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.722458 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.730214 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.735235 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8389a4b8-d37f-4b24-8447-ca4be67c43c0-kube-api-access-98khv" (OuterVolumeSpecName: "kube-api-access-98khv") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "kube-api-access-98khv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.735279 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-scripts" (OuterVolumeSpecName: "scripts") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.766792 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.823864 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.823899 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.823915 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98khv\" (UniqueName: \"kubernetes.io/projected/8389a4b8-d37f-4b24-8447-ca4be67c43c0-kube-api-access-98khv\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.823927 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.823938 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8389a4b8-d37f-4b24-8447-ca4be67c43c0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.826235 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.829710 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.863578 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-config-data" (OuterVolumeSpecName: "config-data") pod "8389a4b8-d37f-4b24-8447-ca4be67c43c0" (UID: "8389a4b8-d37f-4b24-8447-ca4be67c43c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.930772 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.930810 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:55 crc kubenswrapper[4770]: I0126 19:04:55.930822 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8389a4b8-d37f-4b24-8447-ca4be67c43c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.452748 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.452796 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8389a4b8-d37f-4b24-8447-ca4be67c43c0","Type":"ContainerDied","Data":"2b21f6942851d1109d2ff249df8205e1daf711aa17c1f4c1930382bd51dcecbb"} Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.452835 4770 scope.go:117] "RemoveContainer" containerID="7ed70ee4b6b2e0ff5a85aab084ab447d9aa825f09b88155f26e0afd32280977b" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.485113 4770 scope.go:117] "RemoveContainer" containerID="bbaa61ef1d4ccb509cd18d30d92d84e30ec072898eb9f8c0206ac8d20792436e" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.488530 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.499497 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.521744 4770 scope.go:117] "RemoveContainer" containerID="5e789a0d19fd59676a3132954fdffc70c689494e2e4d3a20b9bf2c2fbba6713c" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.523161 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:56 crc kubenswrapper[4770]: E0126 19:04:56.523635 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="proxy-httpd" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.523660 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="proxy-httpd" Jan 26 19:04:56 crc kubenswrapper[4770]: E0126 19:04:56.523681 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-notification-agent" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.523689 
4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-notification-agent" Jan 26 19:04:56 crc kubenswrapper[4770]: E0126 19:04:56.523752 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-central-agent" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.525625 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-central-agent" Jan 26 19:04:56 crc kubenswrapper[4770]: E0126 19:04:56.525651 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="sg-core" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.525671 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="sg-core" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.526034 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-central-agent" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.526058 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="proxy-httpd" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.526076 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="ceilometer-notification-agent" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.526092 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" containerName="sg-core" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.528167 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.532263 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.532360 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.532471 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.536120 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.573384 4770 scope.go:117] "RemoveContainer" containerID="cff82115f31f2121d3c02651531688f3c2a74bd03a578f8027b386ac32277d9b" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.646996 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.647068 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-scripts\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.647089 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-config-data\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 
19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.647214 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rztkp\" (UniqueName: \"kubernetes.io/projected/3f809a30-5737-424e-b40c-5058d98726e4-kube-api-access-rztkp\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.647251 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f809a30-5737-424e-b40c-5058d98726e4-run-httpd\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.647271 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.647537 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f809a30-5737-424e-b40c-5058d98726e4-log-httpd\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.647603 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.750371 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.750520 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f809a30-5737-424e-b40c-5058d98726e4-log-httpd\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.750565 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.750655 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.750791 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-scripts\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.750835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-config-data\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " 
pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.750980 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rztkp\" (UniqueName: \"kubernetes.io/projected/3f809a30-5737-424e-b40c-5058d98726e4-kube-api-access-rztkp\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.751089 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f809a30-5737-424e-b40c-5058d98726e4-run-httpd\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.751544 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f809a30-5737-424e-b40c-5058d98726e4-log-httpd\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.751873 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f809a30-5737-424e-b40c-5058d98726e4-run-httpd\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.755565 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-scripts\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.757436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.758513 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.761492 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.764115 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f809a30-5737-424e-b40c-5058d98726e4-config-data\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.777322 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rztkp\" (UniqueName: \"kubernetes.io/projected/3f809a30-5737-424e-b40c-5058d98726e4-kube-api-access-rztkp\") pod \"ceilometer-0\" (UID: \"3f809a30-5737-424e-b40c-5058d98726e4\") " pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.860821 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 19:04:56 crc kubenswrapper[4770]: I0126 19:04:56.977814 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.005173 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.352659 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.464279 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f809a30-5737-424e-b40c-5058d98726e4","Type":"ContainerStarted","Data":"f63501a00e41431e2027f71f912718a04a2f5b28123c736f94ac4cdcb57c44dc"} Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.479903 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.638873 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-n6fsv"] Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.640722 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.643488 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.655043 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.655335 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n6fsv"] Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.778179 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rlnr\" (UniqueName: \"kubernetes.io/projected/f79133e5-1315-4728-bbb2-7ad2912ed30b-kube-api-access-4rlnr\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.778275 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-config-data\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.778296 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-scripts\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.778326 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.778417 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8389a4b8-d37f-4b24-8447-ca4be67c43c0" path="/var/lib/kubelet/pods/8389a4b8-d37f-4b24-8447-ca4be67c43c0/volumes" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.880191 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rlnr\" (UniqueName: \"kubernetes.io/projected/f79133e5-1315-4728-bbb2-7ad2912ed30b-kube-api-access-4rlnr\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.880318 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-config-data\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.880339 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-scripts\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.880363 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " 
pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.885296 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-config-data\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.886245 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.888167 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-scripts\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:57 crc kubenswrapper[4770]: I0126 19:04:57.898367 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rlnr\" (UniqueName: \"kubernetes.io/projected/f79133e5-1315-4728-bbb2-7ad2912ed30b-kube-api-access-4rlnr\") pod \"nova-cell1-cell-mapping-n6fsv\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:58 crc kubenswrapper[4770]: I0126 19:04:58.008088 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:04:58 crc kubenswrapper[4770]: I0126 19:04:58.474732 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f809a30-5737-424e-b40c-5058d98726e4","Type":"ContainerStarted","Data":"0aea74559c2203085eb189785acf2eee01d6aa1b9f0790a17332c81de86c6d4f"} Jan 26 19:04:58 crc kubenswrapper[4770]: I0126 19:04:58.475933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f809a30-5737-424e-b40c-5058d98726e4","Type":"ContainerStarted","Data":"a2f065a1b9e79d14afb64a10166c729ba25139d9c3b51ca34f1cd297849da018"} Jan 26 19:04:58 crc kubenswrapper[4770]: I0126 19:04:58.546163 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n6fsv"] Jan 26 19:04:58 crc kubenswrapper[4770]: W0126 19:04:58.554626 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf79133e5_1315_4728_bbb2_7ad2912ed30b.slice/crio-07d2e0503a92d985d05e780e65196feb1b3b37e104f299f5a1a1b17f4ac2c454 WatchSource:0}: Error finding container 07d2e0503a92d985d05e780e65196feb1b3b37e104f299f5a1a1b17f4ac2c454: Status 404 returned error can't find the container with id 07d2e0503a92d985d05e780e65196feb1b3b37e104f299f5a1a1b17f4ac2c454 Jan 26 19:04:58 crc kubenswrapper[4770]: I0126 19:04:58.800021 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-898947885-9fsdq" Jan 26 19:04:58 crc kubenswrapper[4770]: I0126 19:04:58.865934 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b884f959-6g8pb"] Jan 26 19:04:58 crc kubenswrapper[4770]: I0126 19:04:58.866203 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" podUID="5388854f-bcd6-460e-be84-c329e053d5ae" containerName="dnsmasq-dns" 
containerID="cri-o://0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd" gracePeriod=10 Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.431411 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.494802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n6fsv" event={"ID":"f79133e5-1315-4728-bbb2-7ad2912ed30b","Type":"ContainerStarted","Data":"2cbf6224b896fc14ff17249793441fef1e367b87b9bef13ac30656df6f8de035"} Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.494850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n6fsv" event={"ID":"f79133e5-1315-4728-bbb2-7ad2912ed30b","Type":"ContainerStarted","Data":"07d2e0503a92d985d05e780e65196feb1b3b37e104f299f5a1a1b17f4ac2c454"} Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.499908 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f809a30-5737-424e-b40c-5058d98726e4","Type":"ContainerStarted","Data":"a55329aeaa9d2d32254885366e8bed4aabad5453a15b2c853b5e178c1371006d"} Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.504643 4770 generic.go:334] "Generic (PLEG): container finished" podID="5388854f-bcd6-460e-be84-c329e053d5ae" containerID="0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd" exitCode=0 Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.504687 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" event={"ID":"5388854f-bcd6-460e-be84-c329e053d5ae","Type":"ContainerDied","Data":"0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd"} Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.504723 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" 
event={"ID":"5388854f-bcd6-460e-be84-c329e053d5ae","Type":"ContainerDied","Data":"8729a8461795621aea09f56b67afc7b1558cda9df105e9f9a52e7ac4ba3d2049"} Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.504741 4770 scope.go:117] "RemoveContainer" containerID="0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.504882 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b884f959-6g8pb" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.513919 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-n6fsv" podStartSLOduration=2.513905171 podStartE2EDuration="2.513905171s" podCreationTimestamp="2026-01-26 19:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:04:59.513380416 +0000 UTC m=+1384.078287148" watchObservedRunningTime="2026-01-26 19:04:59.513905171 +0000 UTC m=+1384.078811903" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.522655 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-swift-storage-0\") pod \"5388854f-bcd6-460e-be84-c329e053d5ae\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.523082 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sj2g\" (UniqueName: \"kubernetes.io/projected/5388854f-bcd6-460e-be84-c329e053d5ae-kube-api-access-6sj2g\") pod \"5388854f-bcd6-460e-be84-c329e053d5ae\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.523165 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-svc\") pod \"5388854f-bcd6-460e-be84-c329e053d5ae\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.523209 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-sb\") pod \"5388854f-bcd6-460e-be84-c329e053d5ae\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.523251 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-config\") pod \"5388854f-bcd6-460e-be84-c329e053d5ae\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.523282 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-nb\") pod \"5388854f-bcd6-460e-be84-c329e053d5ae\" (UID: \"5388854f-bcd6-460e-be84-c329e053d5ae\") " Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.536826 4770 scope.go:117] "RemoveContainer" containerID="e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.559585 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5388854f-bcd6-460e-be84-c329e053d5ae-kube-api-access-6sj2g" (OuterVolumeSpecName: "kube-api-access-6sj2g") pod "5388854f-bcd6-460e-be84-c329e053d5ae" (UID: "5388854f-bcd6-460e-be84-c329e053d5ae"). InnerVolumeSpecName "kube-api-access-6sj2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.610116 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5388854f-bcd6-460e-be84-c329e053d5ae" (UID: "5388854f-bcd6-460e-be84-c329e053d5ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.610414 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5388854f-bcd6-460e-be84-c329e053d5ae" (UID: "5388854f-bcd6-460e-be84-c329e053d5ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.616362 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-config" (OuterVolumeSpecName: "config") pod "5388854f-bcd6-460e-be84-c329e053d5ae" (UID: "5388854f-bcd6-460e-be84-c329e053d5ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.616648 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5388854f-bcd6-460e-be84-c329e053d5ae" (UID: "5388854f-bcd6-460e-be84-c329e053d5ae"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.626093 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.626130 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.626143 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.626152 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sj2g\" (UniqueName: \"kubernetes.io/projected/5388854f-bcd6-460e-be84-c329e053d5ae-kube-api-access-6sj2g\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.626163 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.632629 4770 scope.go:117] "RemoveContainer" containerID="0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd" Jan 26 19:04:59 crc kubenswrapper[4770]: E0126 19:04:59.633076 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd\": container with ID starting with 0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd not found: ID does not exist" 
containerID="0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.633108 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd"} err="failed to get container status \"0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd\": rpc error: code = NotFound desc = could not find container \"0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd\": container with ID starting with 0dab14ec9db8c5fe5fdb21d5e9c1f375efaa4f7aeb57aab689d991f301715bbd not found: ID does not exist" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.633140 4770 scope.go:117] "RemoveContainer" containerID="e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9" Jan 26 19:04:59 crc kubenswrapper[4770]: E0126 19:04:59.633448 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9\": container with ID starting with e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9 not found: ID does not exist" containerID="e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.633481 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9"} err="failed to get container status \"e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9\": rpc error: code = NotFound desc = could not find container \"e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9\": container with ID starting with e4db527d24a24ac4ff2d00a4dcaceb48acb2e94225b46f25d44f632e75ba10b9 not found: ID does not exist" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.641349 4770 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5388854f-bcd6-460e-be84-c329e053d5ae" (UID: "5388854f-bcd6-460e-be84-c329e053d5ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.727457 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5388854f-bcd6-460e-be84-c329e053d5ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.916525 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b884f959-6g8pb"] Jan 26 19:04:59 crc kubenswrapper[4770]: I0126 19:04:59.933736 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57b884f959-6g8pb"] Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.330392 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.330457 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.330507 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.331349 4770 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"386f64784b2c322d50fefdfd9ed37a3405a8ac95082cf30f59e32e718434f3cd"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.331425 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://386f64784b2c322d50fefdfd9ed37a3405a8ac95082cf30f59e32e718434f3cd" gracePeriod=600 Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.528578 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f809a30-5737-424e-b40c-5058d98726e4","Type":"ContainerStarted","Data":"6ac7492071acf83744fff3a4c269bf444917ebaa349882a036947752938536ce"} Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.528837 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.533882 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="386f64784b2c322d50fefdfd9ed37a3405a8ac95082cf30f59e32e718434f3cd" exitCode=0 Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.533958 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"386f64784b2c322d50fefdfd9ed37a3405a8ac95082cf30f59e32e718434f3cd"} Jan 26 19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.534192 4770 scope.go:117] "RemoveContainer" containerID="c87daf1a126cd93e465998417d60959f10223fe0df7679f35c5368eec51dbce0" Jan 26 
19:05:00 crc kubenswrapper[4770]: I0126 19:05:00.572395 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.079128717 podStartE2EDuration="4.572374482s" podCreationTimestamp="2026-01-26 19:04:56 +0000 UTC" firstStartedPulling="2026-01-26 19:04:57.36332323 +0000 UTC m=+1381.928229972" lastFinishedPulling="2026-01-26 19:04:59.856569005 +0000 UTC m=+1384.421475737" observedRunningTime="2026-01-26 19:05:00.560667088 +0000 UTC m=+1385.125573830" watchObservedRunningTime="2026-01-26 19:05:00.572374482 +0000 UTC m=+1385.137281214" Jan 26 19:05:01 crc kubenswrapper[4770]: I0126 19:05:01.550072 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3"} Jan 26 19:05:01 crc kubenswrapper[4770]: I0126 19:05:01.781185 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5388854f-bcd6-460e-be84-c329e053d5ae" path="/var/lib/kubelet/pods/5388854f-bcd6-460e-be84-c329e053d5ae/volumes" Jan 26 19:05:03 crc kubenswrapper[4770]: I0126 19:05:03.748185 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:05:03 crc kubenswrapper[4770]: I0126 19:05:03.748550 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:05:04 crc kubenswrapper[4770]: I0126 19:05:04.594486 4770 generic.go:334] "Generic (PLEG): container finished" podID="f79133e5-1315-4728-bbb2-7ad2912ed30b" containerID="2cbf6224b896fc14ff17249793441fef1e367b87b9bef13ac30656df6f8de035" exitCode=0 Jan 26 19:05:04 crc kubenswrapper[4770]: I0126 19:05:04.594810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n6fsv" 
event={"ID":"f79133e5-1315-4728-bbb2-7ad2912ed30b","Type":"ContainerDied","Data":"2cbf6224b896fc14ff17249793441fef1e367b87b9bef13ac30656df6f8de035"} Jan 26 19:05:04 crc kubenswrapper[4770]: I0126 19:05:04.766913 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 19:05:04 crc kubenswrapper[4770]: I0126 19:05:04.766887 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.029862 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.186586 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-scripts\") pod \"f79133e5-1315-4728-bbb2-7ad2912ed30b\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.186714 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rlnr\" (UniqueName: \"kubernetes.io/projected/f79133e5-1315-4728-bbb2-7ad2912ed30b-kube-api-access-4rlnr\") pod \"f79133e5-1315-4728-bbb2-7ad2912ed30b\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.186741 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-combined-ca-bundle\") pod \"f79133e5-1315-4728-bbb2-7ad2912ed30b\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.186878 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-config-data\") pod \"f79133e5-1315-4728-bbb2-7ad2912ed30b\" (UID: \"f79133e5-1315-4728-bbb2-7ad2912ed30b\") " Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.192935 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79133e5-1315-4728-bbb2-7ad2912ed30b-kube-api-access-4rlnr" (OuterVolumeSpecName: "kube-api-access-4rlnr") pod "f79133e5-1315-4728-bbb2-7ad2912ed30b" (UID: "f79133e5-1315-4728-bbb2-7ad2912ed30b"). InnerVolumeSpecName "kube-api-access-4rlnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.195911 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-scripts" (OuterVolumeSpecName: "scripts") pod "f79133e5-1315-4728-bbb2-7ad2912ed30b" (UID: "f79133e5-1315-4728-bbb2-7ad2912ed30b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.214299 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-config-data" (OuterVolumeSpecName: "config-data") pod "f79133e5-1315-4728-bbb2-7ad2912ed30b" (UID: "f79133e5-1315-4728-bbb2-7ad2912ed30b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.222345 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f79133e5-1315-4728-bbb2-7ad2912ed30b" (UID: "f79133e5-1315-4728-bbb2-7ad2912ed30b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.289393 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.289451 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.289467 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rlnr\" (UniqueName: \"kubernetes.io/projected/f79133e5-1315-4728-bbb2-7ad2912ed30b-kube-api-access-4rlnr\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.289482 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79133e5-1315-4728-bbb2-7ad2912ed30b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.650886 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n6fsv" event={"ID":"f79133e5-1315-4728-bbb2-7ad2912ed30b","Type":"ContainerDied","Data":"07d2e0503a92d985d05e780e65196feb1b3b37e104f299f5a1a1b17f4ac2c454"} Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.650935 4770 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="07d2e0503a92d985d05e780e65196feb1b3b37e104f299f5a1a1b17f4ac2c454" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.651019 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n6fsv" Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.885389 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.885762 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.885974 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-log" containerID="cri-o://5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04" gracePeriod=30 Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.886168 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="500bf0cd-db31-4cda-b921-c069e9787b0d" containerName="nova-scheduler-scheduler" containerID="cri-o://e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" gracePeriod=30 Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.886626 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-api" containerID="cri-o://114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f" gracePeriod=30 Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.905724 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.905994 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7148ba5e-8608-4b09-b041-2099677ae056" 
containerName="nova-metadata-log" containerID="cri-o://e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a" gracePeriod=30 Jan 26 19:05:06 crc kubenswrapper[4770]: I0126 19:05:06.908121 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-metadata" containerID="cri-o://51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773" gracePeriod=30 Jan 26 19:05:07 crc kubenswrapper[4770]: I0126 19:05:07.660650 4770 generic.go:334] "Generic (PLEG): container finished" podID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerID="5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04" exitCode=143 Jan 26 19:05:07 crc kubenswrapper[4770]: I0126 19:05:07.660728 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b95ad96-d640-41ec-9852-ddcf5424f174","Type":"ContainerDied","Data":"5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04"} Jan 26 19:05:07 crc kubenswrapper[4770]: I0126 19:05:07.662540 4770 generic.go:334] "Generic (PLEG): container finished" podID="7148ba5e-8608-4b09-b041-2099677ae056" containerID="e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a" exitCode=143 Jan 26 19:05:07 crc kubenswrapper[4770]: I0126 19:05:07.662560 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7148ba5e-8608-4b09-b041-2099677ae056","Type":"ContainerDied","Data":"e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a"} Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.212371 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.352784 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-config-data\") pod \"7148ba5e-8608-4b09-b041-2099677ae056\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") "
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.352847 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7148ba5e-8608-4b09-b041-2099677ae056-logs\") pod \"7148ba5e-8608-4b09-b041-2099677ae056\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") "
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.352923 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-combined-ca-bundle\") pod \"7148ba5e-8608-4b09-b041-2099677ae056\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") "
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.353081 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk77g\" (UniqueName: \"kubernetes.io/projected/7148ba5e-8608-4b09-b041-2099677ae056-kube-api-access-fk77g\") pod \"7148ba5e-8608-4b09-b041-2099677ae056\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") "
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.353136 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-nova-metadata-tls-certs\") pod \"7148ba5e-8608-4b09-b041-2099677ae056\" (UID: \"7148ba5e-8608-4b09-b041-2099677ae056\") "
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.354532 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7148ba5e-8608-4b09-b041-2099677ae056-logs" (OuterVolumeSpecName: "logs") pod "7148ba5e-8608-4b09-b041-2099677ae056" (UID: "7148ba5e-8608-4b09-b041-2099677ae056"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.361660 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.381851 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7148ba5e-8608-4b09-b041-2099677ae056-kube-api-access-fk77g" (OuterVolumeSpecName: "kube-api-access-fk77g") pod "7148ba5e-8608-4b09-b041-2099677ae056" (UID: "7148ba5e-8608-4b09-b041-2099677ae056"). InnerVolumeSpecName "kube-api-access-fk77g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.381961 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.390157 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7148ba5e-8608-4b09-b041-2099677ae056" (UID: "7148ba5e-8608-4b09-b041-2099677ae056"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.393895 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.393941 4770 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="500bf0cd-db31-4cda-b921-c069e9787b0d" containerName="nova-scheduler-scheduler"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.396614 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-config-data" (OuterVolumeSpecName: "config-data") pod "7148ba5e-8608-4b09-b041-2099677ae056" (UID: "7148ba5e-8608-4b09-b041-2099677ae056"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.442128 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7148ba5e-8608-4b09-b041-2099677ae056" (UID: "7148ba5e-8608-4b09-b041-2099677ae056"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.455676 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.455733 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7148ba5e-8608-4b09-b041-2099677ae056-logs\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.455744 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.455755 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk77g\" (UniqueName: \"kubernetes.io/projected/7148ba5e-8608-4b09-b041-2099677ae056-kube-api-access-fk77g\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.455772 4770 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7148ba5e-8608-4b09-b041-2099677ae056-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.673359 4770 generic.go:334] "Generic (PLEG): container finished" podID="7148ba5e-8608-4b09-b041-2099677ae056" containerID="51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773" exitCode=0
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.673407 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7148ba5e-8608-4b09-b041-2099677ae056","Type":"ContainerDied","Data":"51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773"}
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.673436 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7148ba5e-8608-4b09-b041-2099677ae056","Type":"ContainerDied","Data":"f20e54b53ec64a025edd3fd213bf8b5ad2d5d7b18a499f7a0df0ad940f56a05c"}
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.673454 4770 scope.go:117] "RemoveContainer" containerID="51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.673647 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.719817 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.728976 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.729875 4770 scope.go:117] "RemoveContainer" containerID="e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.752434 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.753071 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79133e5-1315-4728-bbb2-7ad2912ed30b" containerName="nova-manage"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753096 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79133e5-1315-4728-bbb2-7ad2912ed30b" containerName="nova-manage"
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.753120 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-log"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753129 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-log"
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.753142 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-metadata"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753150 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-metadata"
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.753166 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5388854f-bcd6-460e-be84-c329e053d5ae" containerName="init"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753174 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5388854f-bcd6-460e-be84-c329e053d5ae" containerName="init"
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.753208 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5388854f-bcd6-460e-be84-c329e053d5ae" containerName="dnsmasq-dns"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753217 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5388854f-bcd6-460e-be84-c329e053d5ae" containerName="dnsmasq-dns"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753420 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-log"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753451 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7148ba5e-8608-4b09-b041-2099677ae056" containerName="nova-metadata-metadata"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753469 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79133e5-1315-4728-bbb2-7ad2912ed30b" containerName="nova-manage"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.753481 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5388854f-bcd6-460e-be84-c329e053d5ae" containerName="dnsmasq-dns"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.754892 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.757972 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.758116 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.764961 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.784574 4770 scope.go:117] "RemoveContainer" containerID="51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773"
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.785205 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773\": container with ID starting with 51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773 not found: ID does not exist" containerID="51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.785253 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773"} err="failed to get container status \"51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773\": rpc error: code = NotFound desc = could not find container \"51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773\": container with ID starting with 51a2b16feb759e8ea2a76eb8951d47d29ca88c7b14df773bbcc4acf02dc13773 not found: ID does not exist"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.785280 4770 scope.go:117] "RemoveContainer" containerID="e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a"
Jan 26 19:05:08 crc kubenswrapper[4770]: E0126 19:05:08.785568 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a\": container with ID starting with e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a not found: ID does not exist" containerID="e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.785621 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a"} err="failed to get container status \"e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a\": rpc error: code = NotFound desc = could not find container \"e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a\": container with ID starting with e2c010ae318ca9af068ef992aaa327cee6f30a4eb1357c344ad7b1eb05fc593a not found: ID does not exist"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.863294 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-config-data\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.863389 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.863416 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7781b437-1736-47ca-b461-7fc8359ef733-logs\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.863451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.863513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsvj\" (UniqueName: \"kubernetes.io/projected/7781b437-1736-47ca-b461-7fc8359ef733-kube-api-access-vvsvj\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.964921 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsvj\" (UniqueName: \"kubernetes.io/projected/7781b437-1736-47ca-b461-7fc8359ef733-kube-api-access-vvsvj\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.965056 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-config-data\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.965118 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.965144 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7781b437-1736-47ca-b461-7fc8359ef733-logs\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.965177 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.965893 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7781b437-1736-47ca-b461-7fc8359ef733-logs\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.969009 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.969113 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-config-data\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.969709 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7781b437-1736-47ca-b461-7fc8359ef733-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:08 crc kubenswrapper[4770]: I0126 19:05:08.980054 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsvj\" (UniqueName: \"kubernetes.io/projected/7781b437-1736-47ca-b461-7fc8359ef733-kube-api-access-vvsvj\") pod \"nova-metadata-0\" (UID: \"7781b437-1736-47ca-b461-7fc8359ef733\") " pod="openstack/nova-metadata-0"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.089110 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.536650 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.626291 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.678621 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-config-data\") pod \"2b95ad96-d640-41ec-9852-ddcf5424f174\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") "
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.684120 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz8hd\" (UniqueName: \"kubernetes.io/projected/2b95ad96-d640-41ec-9852-ddcf5424f174-kube-api-access-zz8hd\") pod \"2b95ad96-d640-41ec-9852-ddcf5424f174\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") "
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.684434 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b95ad96-d640-41ec-9852-ddcf5424f174-logs\") pod \"2b95ad96-d640-41ec-9852-ddcf5424f174\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") "
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.684910 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b95ad96-d640-41ec-9852-ddcf5424f174-logs" (OuterVolumeSpecName: "logs") pod "2b95ad96-d640-41ec-9852-ddcf5424f174" (UID: "2b95ad96-d640-41ec-9852-ddcf5424f174"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.685240 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-public-tls-certs\") pod \"2b95ad96-d640-41ec-9852-ddcf5424f174\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") "
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.685727 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-internal-tls-certs\") pod \"2b95ad96-d640-41ec-9852-ddcf5424f174\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") "
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.685918 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-combined-ca-bundle\") pod \"2b95ad96-d640-41ec-9852-ddcf5424f174\" (UID: \"2b95ad96-d640-41ec-9852-ddcf5424f174\") "
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.687140 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b95ad96-d640-41ec-9852-ddcf5424f174-logs\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.689110 4770 generic.go:334] "Generic (PLEG): container finished" podID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerID="114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f" exitCode=0
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.689233 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b95ad96-d640-41ec-9852-ddcf5424f174-kube-api-access-zz8hd" (OuterVolumeSpecName: "kube-api-access-zz8hd") pod "2b95ad96-d640-41ec-9852-ddcf5424f174" (UID: "2b95ad96-d640-41ec-9852-ddcf5424f174"). InnerVolumeSpecName "kube-api-access-zz8hd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.689330 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b95ad96-d640-41ec-9852-ddcf5424f174","Type":"ContainerDied","Data":"114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f"}
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.689391 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b95ad96-d640-41ec-9852-ddcf5424f174","Type":"ContainerDied","Data":"596c65f1bb1cc0b9081fbad9c1fd3af9b2899cff17b165bac517fb83978ac2e6"}
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.689421 4770 scope.go:117] "RemoveContainer" containerID="114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.689493 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.694089 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7781b437-1736-47ca-b461-7fc8359ef733","Type":"ContainerStarted","Data":"53e3fd3f2f0723a504cb2dcf97a960d19873b69581f285a1b8e0666539b34c24"}
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.720488 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b95ad96-d640-41ec-9852-ddcf5424f174" (UID: "2b95ad96-d640-41ec-9852-ddcf5424f174"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.728079 4770 scope.go:117] "RemoveContainer" containerID="5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.751534 4770 scope.go:117] "RemoveContainer" containerID="114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f"
Jan 26 19:05:09 crc kubenswrapper[4770]: E0126 19:05:09.752414 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f\": container with ID starting with 114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f not found: ID does not exist" containerID="114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.752457 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f"} err="failed to get container status \"114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f\": rpc error: code = NotFound desc = could not find container \"114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f\": container with ID starting with 114ffa4fb6170849465d86acd9e79edb4a120b4ac7c24cc1726c44c34d41d72f not found: ID does not exist"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.752484 4770 scope.go:117] "RemoveContainer" containerID="5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04"
Jan 26 19:05:09 crc kubenswrapper[4770]: E0126 19:05:09.752890 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04\": container with ID starting with 5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04 not found: ID does not exist" containerID="5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.752920 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04"} err="failed to get container status \"5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04\": rpc error: code = NotFound desc = could not find container \"5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04\": container with ID starting with 5f665e5cbbf4eca750bc376efbb3341f105d4302210a153a57c50207e0152f04 not found: ID does not exist"
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.758730 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-config-data" (OuterVolumeSpecName: "config-data") pod "2b95ad96-d640-41ec-9852-ddcf5424f174" (UID: "2b95ad96-d640-41ec-9852-ddcf5424f174"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.759755 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2b95ad96-d640-41ec-9852-ddcf5424f174" (UID: "2b95ad96-d640-41ec-9852-ddcf5424f174"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.764644 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2b95ad96-d640-41ec-9852-ddcf5424f174" (UID: "2b95ad96-d640-41ec-9852-ddcf5424f174"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.794202 4770 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.794273 4770 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.794287 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.794301 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b95ad96-d640-41ec-9852-ddcf5424f174-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.794348 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz8hd\" (UniqueName: \"kubernetes.io/projected/2b95ad96-d640-41ec-9852-ddcf5424f174-kube-api-access-zz8hd\") on node \"crc\" DevicePath \"\""
Jan 26 19:05:09 crc kubenswrapper[4770]: I0126 19:05:09.794438 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7148ba5e-8608-4b09-b041-2099677ae056" path="/var/lib/kubelet/pods/7148ba5e-8608-4b09-b041-2099677ae056/volumes"
Jan 26 19:05:10 crc kubenswrapper[4770]: E0126 19:05:10.003324 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b95ad96_d640_41ec_9852_ddcf5424f174.slice/crio-596c65f1bb1cc0b9081fbad9c1fd3af9b2899cff17b165bac517fb83978ac2e6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b95ad96_d640_41ec_9852_ddcf5424f174.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.041756 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.053528 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.070800 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:05:10 crc kubenswrapper[4770]: E0126 19:05:10.071298 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-api"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.071320 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-api"
Jan 26 19:05:10 crc kubenswrapper[4770]: E0126 19:05:10.071363 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-log"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.071405 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-log"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.071627 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-api"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.071690 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" containerName="nova-api-log"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.072887 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.077108 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.077415 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.077575 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.101338 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.215619 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-config-data\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.215711 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.215782 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6wcv\" (UniqueName: \"kubernetes.io/projected/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-kube-api-access-x6wcv\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.215811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-logs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.215834 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.215958 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.318285 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-config-data\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.318359 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.318491 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6wcv\" (UniqueName: \"kubernetes.io/projected/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-kube-api-access-x6wcv\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.318546 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-logs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.318593 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.318786 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.319999 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-logs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.324738 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.324761 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.324894 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.339182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-config-data\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.342576 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6wcv\" (UniqueName: \"kubernetes.io/projected/8af7a04a-7f7c-4e64-ab2d-40bb252db6ae-kube-api-access-x6wcv\") pod \"nova-api-0\" (UID: \"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae\") " pod="openstack/nova-api-0"
Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.436549 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.705980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7781b437-1736-47ca-b461-7fc8359ef733","Type":"ContainerStarted","Data":"85247c2ddb439623db46bcb624ed9ac56ed9dc049cc1fe42498fc829e53e0325"} Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.706291 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7781b437-1736-47ca-b461-7fc8359ef733","Type":"ContainerStarted","Data":"1a2700aa40d784ee637ed8e99ceb5e3d3f41b37f7788403250ec66768d5b2ac2"} Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.725912 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.725892883 podStartE2EDuration="2.725892883s" podCreationTimestamp="2026-01-26 19:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:05:10.72325897 +0000 UTC m=+1395.288165752" watchObservedRunningTime="2026-01-26 19:05:10.725892883 +0000 UTC m=+1395.290799615" Jan 26 19:05:10 crc kubenswrapper[4770]: I0126 19:05:10.889011 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 19:05:11 crc kubenswrapper[4770]: I0126 19:05:11.721141 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae","Type":"ContainerStarted","Data":"7e962e70caad708abec5c2d2fee119d1ec56ab89e1cc6752bff48865eea463e7"} Jan 26 19:05:11 crc kubenswrapper[4770]: I0126 19:05:11.721477 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae","Type":"ContainerStarted","Data":"8f74c513e35fc0ca6c0322de9a77b00fe2b1f338a162180fd9397aadf016073d"} Jan 26 19:05:11 crc kubenswrapper[4770]: I0126 
19:05:11.721492 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8af7a04a-7f7c-4e64-ab2d-40bb252db6ae","Type":"ContainerStarted","Data":"58d90047b20d2bdadab40b49d182e1642c71cb4fc1d833579463bc31c6bd1ef5"} Jan 26 19:05:11 crc kubenswrapper[4770]: I0126 19:05:11.745391 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.745371037 podStartE2EDuration="1.745371037s" podCreationTimestamp="2026-01-26 19:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:05:11.74043056 +0000 UTC m=+1396.305337322" watchObservedRunningTime="2026-01-26 19:05:11.745371037 +0000 UTC m=+1396.310277769" Jan 26 19:05:11 crc kubenswrapper[4770]: I0126 19:05:11.780392 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b95ad96-d640-41ec-9852-ddcf5424f174" path="/var/lib/kubelet/pods/2b95ad96-d640-41ec-9852-ddcf5424f174/volumes" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.626614 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.744437 4770 generic.go:334] "Generic (PLEG): container finished" podID="500bf0cd-db31-4cda-b921-c069e9787b0d" containerID="e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" exitCode=0 Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.744664 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.744813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"500bf0cd-db31-4cda-b921-c069e9787b0d","Type":"ContainerDied","Data":"e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c"} Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.744884 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"500bf0cd-db31-4cda-b921-c069e9787b0d","Type":"ContainerDied","Data":"5f1909c7d44ddc1c1c089ba8d596cd25298bb370802e52ebc4c7dafea8557638"} Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.744913 4770 scope.go:117] "RemoveContainer" containerID="e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.764354 4770 scope.go:117] "RemoveContainer" containerID="e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" Jan 26 19:05:12 crc kubenswrapper[4770]: E0126 19:05:12.764922 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c\": container with ID starting with e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c not found: ID does not exist" containerID="e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.764971 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c"} err="failed to get container status \"e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c\": rpc error: code = NotFound desc = could not find container \"e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c\": container with ID starting with 
e0a7670166256bfa0bd7cf4eec24d5cd758e11f56a61db4e75699b85c0400d8c not found: ID does not exist" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.771293 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgtft\" (UniqueName: \"kubernetes.io/projected/500bf0cd-db31-4cda-b921-c069e9787b0d-kube-api-access-kgtft\") pod \"500bf0cd-db31-4cda-b921-c069e9787b0d\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.771359 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-config-data\") pod \"500bf0cd-db31-4cda-b921-c069e9787b0d\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.771495 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-combined-ca-bundle\") pod \"500bf0cd-db31-4cda-b921-c069e9787b0d\" (UID: \"500bf0cd-db31-4cda-b921-c069e9787b0d\") " Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.778768 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500bf0cd-db31-4cda-b921-c069e9787b0d-kube-api-access-kgtft" (OuterVolumeSpecName: "kube-api-access-kgtft") pod "500bf0cd-db31-4cda-b921-c069e9787b0d" (UID: "500bf0cd-db31-4cda-b921-c069e9787b0d"). InnerVolumeSpecName "kube-api-access-kgtft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.802371 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-config-data" (OuterVolumeSpecName: "config-data") pod "500bf0cd-db31-4cda-b921-c069e9787b0d" (UID: "500bf0cd-db31-4cda-b921-c069e9787b0d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.804567 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "500bf0cd-db31-4cda-b921-c069e9787b0d" (UID: "500bf0cd-db31-4cda-b921-c069e9787b0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.874180 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.874214 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500bf0cd-db31-4cda-b921-c069e9787b0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:12 crc kubenswrapper[4770]: I0126 19:05:12.874224 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgtft\" (UniqueName: \"kubernetes.io/projected/500bf0cd-db31-4cda-b921-c069e9787b0d-kube-api-access-kgtft\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.087614 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.106414 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.117954 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:05:13 crc kubenswrapper[4770]: E0126 19:05:13.118464 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500bf0cd-db31-4cda-b921-c069e9787b0d" 
containerName="nova-scheduler-scheduler" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.118485 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="500bf0cd-db31-4cda-b921-c069e9787b0d" containerName="nova-scheduler-scheduler" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.119016 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="500bf0cd-db31-4cda-b921-c069e9787b0d" containerName="nova-scheduler-scheduler" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.119976 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.122526 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.130517 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.283768 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76beecce-14cb-4546-9054-5b8bdd4293d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.283888 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76beecce-14cb-4546-9054-5b8bdd4293d9-config-data\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.283946 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxl7\" (UniqueName: 
\"kubernetes.io/projected/76beecce-14cb-4546-9054-5b8bdd4293d9-kube-api-access-5lxl7\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.385412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76beecce-14cb-4546-9054-5b8bdd4293d9-config-data\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.385473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxl7\" (UniqueName: \"kubernetes.io/projected/76beecce-14cb-4546-9054-5b8bdd4293d9-kube-api-access-5lxl7\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.385573 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76beecce-14cb-4546-9054-5b8bdd4293d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.389359 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76beecce-14cb-4546-9054-5b8bdd4293d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.389680 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76beecce-14cb-4546-9054-5b8bdd4293d9-config-data\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " 
pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.403120 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxl7\" (UniqueName: \"kubernetes.io/projected/76beecce-14cb-4546-9054-5b8bdd4293d9-kube-api-access-5lxl7\") pod \"nova-scheduler-0\" (UID: \"76beecce-14cb-4546-9054-5b8bdd4293d9\") " pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.439005 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.792739 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500bf0cd-db31-4cda-b921-c069e9787b0d" path="/var/lib/kubelet/pods/500bf0cd-db31-4cda-b921-c069e9787b0d/volumes" Jan 26 19:05:13 crc kubenswrapper[4770]: I0126 19:05:13.929805 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 19:05:14 crc kubenswrapper[4770]: I0126 19:05:14.090400 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:05:14 crc kubenswrapper[4770]: I0126 19:05:14.090459 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 19:05:14 crc kubenswrapper[4770]: I0126 19:05:14.774104 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76beecce-14cb-4546-9054-5b8bdd4293d9","Type":"ContainerStarted","Data":"ed60b3aaf0f3ee36659f564b5ac168a31f3335c947c49e279e9a68230d1db5df"} Jan 26 19:05:14 crc kubenswrapper[4770]: I0126 19:05:14.774537 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"76beecce-14cb-4546-9054-5b8bdd4293d9","Type":"ContainerStarted","Data":"e766510c8b657d29b4fb31d70a79a18480290e9135bd6201225c81ac2bb9fb7f"} Jan 26 19:05:14 crc kubenswrapper[4770]: I0126 19:05:14.808916 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.808877269 podStartE2EDuration="1.808877269s" podCreationTimestamp="2026-01-26 19:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:05:14.79588733 +0000 UTC m=+1399.360794072" watchObservedRunningTime="2026-01-26 19:05:14.808877269 +0000 UTC m=+1399.373784001" Jan 26 19:05:18 crc kubenswrapper[4770]: I0126 19:05:18.440068 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 19:05:19 crc kubenswrapper[4770]: I0126 19:05:19.090134 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:05:19 crc kubenswrapper[4770]: I0126 19:05:19.090778 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 19:05:20 crc kubenswrapper[4770]: I0126 19:05:20.101939 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7781b437-1736-47ca-b461-7fc8359ef733" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 19:05:20 crc kubenswrapper[4770]: I0126 19:05:20.101935 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7781b437-1736-47ca-b461-7fc8359ef733" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 19:05:20 crc kubenswrapper[4770]: I0126 19:05:20.436994 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:05:20 crc kubenswrapper[4770]: I0126 19:05:20.437385 4770 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 19:05:21 crc kubenswrapper[4770]: I0126 19:05:21.444858 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8af7a04a-7f7c-4e64-ab2d-40bb252db6ae" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 19:05:21 crc kubenswrapper[4770]: I0126 19:05:21.453859 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8af7a04a-7f7c-4e64-ab2d-40bb252db6ae" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:05:23 crc kubenswrapper[4770]: I0126 19:05:23.440233 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 19:05:23 crc kubenswrapper[4770]: I0126 19:05:23.486670 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 19:05:23 crc kubenswrapper[4770]: I0126 19:05:23.934898 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 19:05:26 crc kubenswrapper[4770]: I0126 19:05:26.875082 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 19:05:29 crc kubenswrapper[4770]: I0126 19:05:29.097360 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:05:29 crc kubenswrapper[4770]: I0126 19:05:29.098734 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 19:05:29 crc kubenswrapper[4770]: I0126 19:05:29.105663 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 
19:05:29 crc kubenswrapper[4770]: I0126 19:05:29.957994 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 19:05:30 crc kubenswrapper[4770]: I0126 19:05:30.446468 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 19:05:30 crc kubenswrapper[4770]: I0126 19:05:30.447156 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 19:05:30 crc kubenswrapper[4770]: I0126 19:05:30.450557 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 19:05:30 crc kubenswrapper[4770]: I0126 19:05:30.455444 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 19:05:30 crc kubenswrapper[4770]: I0126 19:05:30.961859 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 19:05:30 crc kubenswrapper[4770]: I0126 19:05:30.973088 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 19:05:40 crc kubenswrapper[4770]: I0126 19:05:40.831779 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:05:41 crc kubenswrapper[4770]: I0126 19:05:41.762648 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:05:44 crc kubenswrapper[4770]: I0126 19:05:44.474378 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerName="rabbitmq" containerID="cri-o://8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735" gracePeriod=604797 Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.124176 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-smmrh"] Jan 26 19:05:45 crc 
kubenswrapper[4770]: I0126 19:05:45.126904 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.205510 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-smmrh"] Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.254441 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" containerName="rabbitmq" containerID="cri-o://73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba" gracePeriod=604797 Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.309854 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-utilities\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.310276 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-catalog-content\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.310354 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qhmv\" (UniqueName: \"kubernetes.io/projected/46da2484-1760-43e3-b423-634fd2f24b24-kube-api-access-4qhmv\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.412190 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qhmv\" (UniqueName: \"kubernetes.io/projected/46da2484-1760-43e3-b423-634fd2f24b24-kube-api-access-4qhmv\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.412308 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-utilities\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.412374 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-catalog-content\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.412848 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-catalog-content\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.412955 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-utilities\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.432925 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4qhmv\" (UniqueName: \"kubernetes.io/projected/46da2484-1760-43e3-b423-634fd2f24b24-kube-api-access-4qhmv\") pod \"redhat-operators-smmrh\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:45 crc kubenswrapper[4770]: I0126 19:05:45.523895 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.071609 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-smmrh"] Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.148580 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.231754 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-pod-info\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.231837 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.231890 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-confd\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.231952 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-erlang-cookie-secret\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.232025 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-tls\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.232097 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-config-data\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.232258 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-plugins\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.232346 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-server-conf\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.232412 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-erlang-cookie\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc 
kubenswrapper[4770]: I0126 19:05:46.232492 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-plugins-conf\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.232534 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khplx\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-kube-api-access-khplx\") pod \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\" (UID: \"876c1ba4-ebd2-47b9-80d0-5158053c4fb8\") " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.239121 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.240655 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.242373 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-kube-api-access-khplx" (OuterVolumeSpecName: "kube-api-access-khplx") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "kube-api-access-khplx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.242836 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.243079 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.246750 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.249615 4770 generic.go:334] "Generic (PLEG): container finished" podID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerID="8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735" exitCode=0 Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.249769 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"876c1ba4-ebd2-47b9-80d0-5158053c4fb8","Type":"ContainerDied","Data":"8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735"} Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.249830 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"876c1ba4-ebd2-47b9-80d0-5158053c4fb8","Type":"ContainerDied","Data":"3ef9a1a2c9a1a10cf1c9bff02dd4997460ae80a0f4b16ef987567a9de8166e20"} Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.249858 4770 scope.go:117] "RemoveContainer" containerID="8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.250102 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.255263 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-pod-info" (OuterVolumeSpecName: "pod-info") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.260510 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmrh" event={"ID":"46da2484-1760-43e3-b423-634fd2f24b24","Type":"ContainerStarted","Data":"8dc98b1231eb4e23a992dd80d63dd340a44fdf2a63b7558e196151f03a086409"} Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.266482 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.288684 4770 scope.go:117] "RemoveContainer" containerID="5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336099 4770 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336147 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khplx\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-kube-api-access-khplx\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336163 4770 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336203 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336221 4770 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336232 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336243 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336257 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.336997 4770 scope.go:117] "RemoveContainer" containerID="8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735" Jan 26 19:05:46 crc kubenswrapper[4770]: E0126 19:05:46.337609 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735\": container with ID starting with 8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735 not found: ID does not exist" containerID="8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.337660 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735"} err="failed to get container status \"8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735\": rpc error: code = NotFound desc = could not find container \"8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735\": container with ID starting with 8baae19fbb0529cf446cd3fc5b13a5652ed15617d81e94abc5dae482e7eb8735 not found: ID does not exist" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.337688 4770 scope.go:117] "RemoveContainer" containerID="5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698" Jan 26 19:05:46 crc kubenswrapper[4770]: E0126 19:05:46.342397 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698\": container with ID starting with 5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698 not found: ID does not exist" containerID="5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.342437 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698"} err="failed to get container status \"5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698\": rpc error: code = NotFound desc = could not find container \"5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698\": container with ID starting with 5b67114b9a8aa4a41f455823db9c0aefab12c8a87dd8e328798375f08b86e698 not found: ID does not exist" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.420268 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.436479 4770 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-config-data" (OuterVolumeSpecName: "config-data") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.454818 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.454859 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.511931 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-server-conf" (OuterVolumeSpecName: "server-conf") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.580288 4770 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.609253 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "876c1ba4-ebd2-47b9-80d0-5158053c4fb8" (UID: "876c1ba4-ebd2-47b9-80d0-5158053c4fb8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.682934 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/876c1ba4-ebd2-47b9-80d0-5158053c4fb8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.926779 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.940034 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.959986 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:05:46 crc kubenswrapper[4770]: E0126 19:05:46.960472 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerName="setup-container" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.960497 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerName="setup-container" Jan 26 19:05:46 crc kubenswrapper[4770]: E0126 19:05:46.960517 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerName="rabbitmq" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.960526 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerName="rabbitmq" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.960756 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" containerName="rabbitmq" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.962070 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.963715 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.963896 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.968591 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.968848 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.969128 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sw2ks" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.969329 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.969572 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.985194 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:05:46 crc kubenswrapper[4770]: I0126 19:05:46.992301 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126298 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-config-data\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126439 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-tls\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126475 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/176a0205-a131-4510-bcf5-420945c4c6ee-pod-info\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126504 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-erlang-cookie\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126541 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-confd\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126656 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-v8ldz\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-kube-api-access-v8ldz\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126757 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-plugins\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126805 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-server-conf\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126847 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-plugins-conf\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126877 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.126906 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/176a0205-a131-4510-bcf5-420945c4c6ee-erlang-cookie-secret\") pod \"176a0205-a131-4510-bcf5-420945c4c6ee\" (UID: \"176a0205-a131-4510-bcf5-420945c4c6ee\") " Jan 26 19:05:47 crc 
kubenswrapper[4770]: I0126 19:05:47.127267 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w874f\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-kube-api-access-w874f\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127383 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127426 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22b25319-9d84-42f2-b5ed-127c06f29bbb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127496 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-config-data\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 
19:05:47.127519 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127544 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127636 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127813 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127856 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22b25319-9d84-42f2-b5ed-127c06f29bbb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.127888 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.144472 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.149972 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/176a0205-a131-4510-bcf5-420945c4c6ee-pod-info" (OuterVolumeSpecName: "pod-info") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.155975 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-kube-api-access-v8ldz" (OuterVolumeSpecName: "kube-api-access-v8ldz") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "kube-api-access-v8ldz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.156416 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.160067 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.160825 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176a0205-a131-4510-bcf5-420945c4c6ee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.164981 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.165615 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.179483 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-config-data" (OuterVolumeSpecName: "config-data") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.232999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233052 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-config-data\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233070 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233087 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: 
I0126 19:05:47.233142 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233220 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233244 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22b25319-9d84-42f2-b5ed-127c06f29bbb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233271 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233332 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w874f\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-kube-api-access-w874f\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233385 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233416 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22b25319-9d84-42f2-b5ed-127c06f29bbb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233492 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8ldz\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-kube-api-access-v8ldz\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233510 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233524 4770 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233548 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233562 4770 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/176a0205-a131-4510-bcf5-420945c4c6ee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233576 4770 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233588 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233600 4770 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/176a0205-a131-4510-bcf5-420945c4c6ee-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.233613 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.235532 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.236076 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.236739 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-config-data\") pod \"rabbitmq-server-0\" 
(UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.237761 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/22b25319-9d84-42f2-b5ed-127c06f29bbb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.237886 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.242039 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.243287 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.246254 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.251885 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/22b25319-9d84-42f2-b5ed-127c06f29bbb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.252004 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/22b25319-9d84-42f2-b5ed-127c06f29bbb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.259415 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-server-conf" (OuterVolumeSpecName: "server-conf") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.261610 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w874f\" (UniqueName: \"kubernetes.io/projected/22b25319-9d84-42f2-b5ed-127c06f29bbb-kube-api-access-w874f\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.287719 4770 generic.go:334] "Generic (PLEG): container finished" podID="176a0205-a131-4510-bcf5-420945c4c6ee" containerID="73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba" exitCode=0 Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.287812 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"176a0205-a131-4510-bcf5-420945c4c6ee","Type":"ContainerDied","Data":"73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba"} Jan 26 
19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.287887 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"176a0205-a131-4510-bcf5-420945c4c6ee","Type":"ContainerDied","Data":"3e53adaec4c0db3bc794e5187e52e4216b957b357539a42fcd690cf59579c327"} Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.287907 4770 scope.go:117] "RemoveContainer" containerID="73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.288725 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.291894 4770 generic.go:334] "Generic (PLEG): container finished" podID="46da2484-1760-43e3-b423-634fd2f24b24" containerID="5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3" exitCode=0 Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.292075 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmrh" event={"ID":"46da2484-1760-43e3-b423-634fd2f24b24","Type":"ContainerDied","Data":"5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3"} Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.313903 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.330585 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"22b25319-9d84-42f2-b5ed-127c06f29bbb\") " pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.343519 4770 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/176a0205-a131-4510-bcf5-420945c4c6ee-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.343554 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.348919 4770 scope.go:117] "RemoveContainer" containerID="fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.350274 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "176a0205-a131-4510-bcf5-420945c4c6ee" (UID: "176a0205-a131-4510-bcf5-420945c4c6ee"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.390835 4770 scope.go:117] "RemoveContainer" containerID="73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba" Jan 26 19:05:47 crc kubenswrapper[4770]: E0126 19:05:47.391434 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba\": container with ID starting with 73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba not found: ID does not exist" containerID="73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.391494 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba"} err="failed to get container status \"73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba\": rpc error: code = NotFound desc = could 
not find container \"73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba\": container with ID starting with 73446f831d8f43acd5147b021c3d4e94c394a6b85189e48458d3183d687a43ba not found: ID does not exist" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.391529 4770 scope.go:117] "RemoveContainer" containerID="fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454" Jan 26 19:05:47 crc kubenswrapper[4770]: E0126 19:05:47.392270 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454\": container with ID starting with fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454 not found: ID does not exist" containerID="fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.392304 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454"} err="failed to get container status \"fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454\": rpc error: code = NotFound desc = could not find container \"fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454\": container with ID starting with fa779e5dcfa2b3e075aa0ed2aff79b60f01d4b4efc37b98e633258d5327b0454 not found: ID does not exist" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.445399 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/176a0205-a131-4510-bcf5-420945c4c6ee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.622628 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.629116 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.634026 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.673455 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:05:47 crc kubenswrapper[4770]: E0126 19:05:47.674406 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" containerName="rabbitmq" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.674428 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" containerName="rabbitmq" Jan 26 19:05:47 crc kubenswrapper[4770]: E0126 19:05:47.674457 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" containerName="setup-container" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.674466 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" containerName="setup-container" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.674753 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" containerName="rabbitmq" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.676149 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.683804 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.684211 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sm5gm" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.685393 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.685574 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.685725 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.686135 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.686295 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.700382 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.787522 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176a0205-a131-4510-bcf5-420945c4c6ee" path="/var/lib/kubelet/pods/176a0205-a131-4510-bcf5-420945c4c6ee/volumes" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.794589 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="876c1ba4-ebd2-47b9-80d0-5158053c4fb8" path="/var/lib/kubelet/pods/876c1ba4-ebd2-47b9-80d0-5158053c4fb8/volumes" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 
19:05:47.852582 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852625 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852721 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852758 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852778 
4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkm6t\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-kube-api-access-hkm6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852795 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/276b57ae-3637-49f3-a25c-9e8d7fc369ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852825 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852912 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/276b57ae-3637-49f3-a25c-9e8d7fc369ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.852951 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.954964 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/276b57ae-3637-49f3-a25c-9e8d7fc369ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955032 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955083 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955104 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955192 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955233 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkm6t\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-kube-api-access-hkm6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955255 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/276b57ae-3637-49f3-a25c-9e8d7fc369ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955283 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.955641 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.956869 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.956923 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.956985 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 
19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.957069 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.957544 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/276b57ae-3637-49f3-a25c-9e8d7fc369ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.960397 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/276b57ae-3637-49f3-a25c-9e8d7fc369ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.961276 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.964563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.973252 4770 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/276b57ae-3637-49f3-a25c-9e8d7fc369ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.974306 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkm6t\" (UniqueName: \"kubernetes.io/projected/276b57ae-3637-49f3-a25c-9e8d7fc369ba-kube-api-access-hkm6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:47 crc kubenswrapper[4770]: I0126 19:05:47.998659 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"276b57ae-3637-49f3-a25c-9e8d7fc369ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:48 crc kubenswrapper[4770]: I0126 19:05:48.100432 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:05:48 crc kubenswrapper[4770]: W0126 19:05:48.173935 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22b25319_9d84_42f2_b5ed_127c06f29bbb.slice/crio-b0e5dfe1a1938b72bf3a90bc598ef347e08e61160e1abd50f5147033d26dc9d6 WatchSource:0}: Error finding container b0e5dfe1a1938b72bf3a90bc598ef347e08e61160e1abd50f5147033d26dc9d6: Status 404 returned error can't find the container with id b0e5dfe1a1938b72bf3a90bc598ef347e08e61160e1abd50f5147033d26dc9d6 Jan 26 19:05:48 crc kubenswrapper[4770]: I0126 19:05:48.175725 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 19:05:48 crc kubenswrapper[4770]: I0126 19:05:48.318063 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22b25319-9d84-42f2-b5ed-127c06f29bbb","Type":"ContainerStarted","Data":"b0e5dfe1a1938b72bf3a90bc598ef347e08e61160e1abd50f5147033d26dc9d6"} Jan 26 19:05:48 crc kubenswrapper[4770]: I0126 19:05:48.643098 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 19:05:48 crc kubenswrapper[4770]: W0126 19:05:48.655017 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod276b57ae_3637_49f3_a25c_9e8d7fc369ba.slice/crio-289f8895750ebb528e4b192640145b46a68d1df25a542cc231d05a98b8348b61 WatchSource:0}: Error finding container 289f8895750ebb528e4b192640145b46a68d1df25a542cc231d05a98b8348b61: Status 404 returned error can't find the container with id 289f8895750ebb528e4b192640145b46a68d1df25a542cc231d05a98b8348b61 Jan 26 19:05:49 crc kubenswrapper[4770]: I0126 19:05:49.330887 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmrh" 
event={"ID":"46da2484-1760-43e3-b423-634fd2f24b24","Type":"ContainerStarted","Data":"230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8"} Jan 26 19:05:49 crc kubenswrapper[4770]: I0126 19:05:49.333370 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"276b57ae-3637-49f3-a25c-9e8d7fc369ba","Type":"ContainerStarted","Data":"289f8895750ebb528e4b192640145b46a68d1df25a542cc231d05a98b8348b61"} Jan 26 19:05:51 crc kubenswrapper[4770]: E0126 19:05:51.084255 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46da2484_1760_43e3_b423_634fd2f24b24.slice/crio-conmon-230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 19:05:51 crc kubenswrapper[4770]: I0126 19:05:51.356479 4770 generic.go:334] "Generic (PLEG): container finished" podID="46da2484-1760-43e3-b423-634fd2f24b24" containerID="230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8" exitCode=0 Jan 26 19:05:51 crc kubenswrapper[4770]: I0126 19:05:51.356540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmrh" event={"ID":"46da2484-1760-43e3-b423-634fd2f24b24","Type":"ContainerDied","Data":"230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8"} Jan 26 19:05:51 crc kubenswrapper[4770]: I0126 19:05:51.358975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22b25319-9d84-42f2-b5ed-127c06f29bbb","Type":"ContainerStarted","Data":"1ed61fbcf283c7be73523241c4aeed7a2a9ee515467cbe423b595dcd0c69daac"} Jan 26 19:05:51 crc kubenswrapper[4770]: I0126 19:05:51.362038 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"276b57ae-3637-49f3-a25c-9e8d7fc369ba","Type":"ContainerStarted","Data":"0fd3a72b68692f57741ab546a404d818d85bdd41fa28813e2c85e4db41575dd2"} Jan 26 19:05:53 crc kubenswrapper[4770]: I0126 19:05:53.417048 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmrh" event={"ID":"46da2484-1760-43e3-b423-634fd2f24b24","Type":"ContainerStarted","Data":"3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8"} Jan 26 19:05:53 crc kubenswrapper[4770]: I0126 19:05:53.444866 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-smmrh" podStartSLOduration=3.307594066 podStartE2EDuration="8.444844137s" podCreationTimestamp="2026-01-26 19:05:45 +0000 UTC" firstStartedPulling="2026-01-26 19:05:47.295328221 +0000 UTC m=+1431.860234953" lastFinishedPulling="2026-01-26 19:05:52.432578292 +0000 UTC m=+1436.997485024" observedRunningTime="2026-01-26 19:05:53.435953212 +0000 UTC m=+1438.000859944" watchObservedRunningTime="2026-01-26 19:05:53.444844137 +0000 UTC m=+1438.009750869" Jan 26 19:05:55 crc kubenswrapper[4770]: I0126 19:05:55.524046 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:55 crc kubenswrapper[4770]: I0126 19:05:55.524521 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.415147 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-674c57545f-gh46h"] Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.417519 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.420171 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.530118 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674c57545f-gh46h"] Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.557188 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-svc\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.557276 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvr9\" (UniqueName: \"kubernetes.io/projected/4702de79-babd-47eb-9c34-cac0efbfb08d-kube-api-access-fbvr9\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.557334 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-nb\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.557350 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-openstack-edpm-ipam\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " 
pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.557396 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-config\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.557430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-sb\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.557451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-swift-storage-0\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.578027 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-smmrh" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="registry-server" probeResult="failure" output=< Jan 26 19:05:56 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 19:05:56 crc kubenswrapper[4770]: > Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.659204 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-sb\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: 
\"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.659473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-swift-storage-0\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.659662 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-svc\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.659818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvr9\" (UniqueName: \"kubernetes.io/projected/4702de79-babd-47eb-9c34-cac0efbfb08d-kube-api-access-fbvr9\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.660032 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-nb\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.660120 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-openstack-edpm-ipam\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: 
\"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.660250 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-config\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.661258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-svc\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.661270 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-config\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.661312 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-openstack-edpm-ipam\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.661497 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-nb\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc 
kubenswrapper[4770]: I0126 19:05:56.661827 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-swift-storage-0\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.662358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-sb\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.692190 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvr9\" (UniqueName: \"kubernetes.io/projected/4702de79-babd-47eb-9c34-cac0efbfb08d-kube-api-access-fbvr9\") pod \"dnsmasq-dns-674c57545f-gh46h\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:56 crc kubenswrapper[4770]: I0126 19:05:56.754825 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:57 crc kubenswrapper[4770]: W0126 19:05:57.274352 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4702de79_babd_47eb_9c34_cac0efbfb08d.slice/crio-2a67600f6f061216f9ce4598cbda0e4dd2c5ee08c9ecec88af2944b53198adce WatchSource:0}: Error finding container 2a67600f6f061216f9ce4598cbda0e4dd2c5ee08c9ecec88af2944b53198adce: Status 404 returned error can't find the container with id 2a67600f6f061216f9ce4598cbda0e4dd2c5ee08c9ecec88af2944b53198adce Jan 26 19:05:57 crc kubenswrapper[4770]: I0126 19:05:57.278228 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674c57545f-gh46h"] Jan 26 19:05:57 crc kubenswrapper[4770]: I0126 19:05:57.490370 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c57545f-gh46h" event={"ID":"4702de79-babd-47eb-9c34-cac0efbfb08d","Type":"ContainerStarted","Data":"2a67600f6f061216f9ce4598cbda0e4dd2c5ee08c9ecec88af2944b53198adce"} Jan 26 19:05:58 crc kubenswrapper[4770]: I0126 19:05:58.503996 4770 generic.go:334] "Generic (PLEG): container finished" podID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerID="c5915d346c243900d047e24cfc0d07b4cabfdc976088dce8596a171e270c79d6" exitCode=0 Jan 26 19:05:58 crc kubenswrapper[4770]: I0126 19:05:58.504087 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c57545f-gh46h" event={"ID":"4702de79-babd-47eb-9c34-cac0efbfb08d","Type":"ContainerDied","Data":"c5915d346c243900d047e24cfc0d07b4cabfdc976088dce8596a171e270c79d6"} Jan 26 19:05:59 crc kubenswrapper[4770]: I0126 19:05:59.520647 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c57545f-gh46h" event={"ID":"4702de79-babd-47eb-9c34-cac0efbfb08d","Type":"ContainerStarted","Data":"17c98a31e93da52937c3ce6b3182b90bda2c5407309f4a1eff8bff1681f13a34"} Jan 26 19:05:59 crc 
kubenswrapper[4770]: I0126 19:05:59.522811 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:05:59 crc kubenswrapper[4770]: I0126 19:05:59.559296 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674c57545f-gh46h" podStartSLOduration=3.559266354 podStartE2EDuration="3.559266354s" podCreationTimestamp="2026-01-26 19:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:05:59.546015207 +0000 UTC m=+1444.110921959" watchObservedRunningTime="2026-01-26 19:05:59.559266354 +0000 UTC m=+1444.124173116" Jan 26 19:06:05 crc kubenswrapper[4770]: I0126 19:06:05.589340 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:06:05 crc kubenswrapper[4770]: I0126 19:06:05.661518 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:06:05 crc kubenswrapper[4770]: I0126 19:06:05.835061 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-smmrh"] Jan 26 19:06:06 crc kubenswrapper[4770]: I0126 19:06:06.756924 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:06:06 crc kubenswrapper[4770]: I0126 19:06:06.847954 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-898947885-9fsdq"] Jan 26 19:06:06 crc kubenswrapper[4770]: I0126 19:06:06.848210 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-898947885-9fsdq" podUID="039e5aac-4654-43b3-aa42-710246c88b00" containerName="dnsmasq-dns" containerID="cri-o://7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9" gracePeriod=10 Jan 26 19:06:07 crc 
kubenswrapper[4770]: I0126 19:06:06.998362 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cbc8554f7-j54ps"] Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.009795 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.015123 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cbc8554f7-j54ps"] Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.099605 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-config\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.099667 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-dns-svc\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.099796 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkmg\" (UniqueName: \"kubernetes.io/projected/5bf0f8d0-3821-4f2d-98d5-eeb869043350-kube-api-access-bhkmg\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.099880 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.099904 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-ovsdbserver-sb\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.099933 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-openstack-edpm-ipam\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.099970 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-dns-swift-storage-0\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.201821 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhkmg\" (UniqueName: \"kubernetes.io/projected/5bf0f8d0-3821-4f2d-98d5-eeb869043350-kube-api-access-bhkmg\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.201920 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-ovsdbserver-nb\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.201947 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-ovsdbserver-sb\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.201975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-openstack-edpm-ipam\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.202009 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-dns-swift-storage-0\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.202033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-config\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.202048 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-dns-svc\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.202927 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-dns-svc\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.203150 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-openstack-edpm-ipam\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.206806 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-dns-swift-storage-0\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.209513 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-ovsdbserver-nb\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.209800 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-config\") pod 
\"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.210270 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bf0f8d0-3821-4f2d-98d5-eeb869043350-ovsdbserver-sb\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.225940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhkmg\" (UniqueName: \"kubernetes.io/projected/5bf0f8d0-3821-4f2d-98d5-eeb869043350-kube-api-access-bhkmg\") pod \"dnsmasq-dns-7cbc8554f7-j54ps\" (UID: \"5bf0f8d0-3821-4f2d-98d5-eeb869043350\") " pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.353235 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.463597 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-898947885-9fsdq" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.508041 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-nb\") pod \"039e5aac-4654-43b3-aa42-710246c88b00\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.508257 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-config\") pod \"039e5aac-4654-43b3-aa42-710246c88b00\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.508333 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-svc\") pod \"039e5aac-4654-43b3-aa42-710246c88b00\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.508391 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-swift-storage-0\") pod \"039e5aac-4654-43b3-aa42-710246c88b00\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.508432 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grcvz\" (UniqueName: \"kubernetes.io/projected/039e5aac-4654-43b3-aa42-710246c88b00-kube-api-access-grcvz\") pod \"039e5aac-4654-43b3-aa42-710246c88b00\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.508568 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-sb\") pod \"039e5aac-4654-43b3-aa42-710246c88b00\" (UID: \"039e5aac-4654-43b3-aa42-710246c88b00\") " Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.527197 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039e5aac-4654-43b3-aa42-710246c88b00-kube-api-access-grcvz" (OuterVolumeSpecName: "kube-api-access-grcvz") pod "039e5aac-4654-43b3-aa42-710246c88b00" (UID: "039e5aac-4654-43b3-aa42-710246c88b00"). InnerVolumeSpecName "kube-api-access-grcvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.583894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "039e5aac-4654-43b3-aa42-710246c88b00" (UID: "039e5aac-4654-43b3-aa42-710246c88b00"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.584855 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-config" (OuterVolumeSpecName: "config") pod "039e5aac-4654-43b3-aa42-710246c88b00" (UID: "039e5aac-4654-43b3-aa42-710246c88b00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.588152 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "039e5aac-4654-43b3-aa42-710246c88b00" (UID: "039e5aac-4654-43b3-aa42-710246c88b00"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.604392 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "039e5aac-4654-43b3-aa42-710246c88b00" (UID: "039e5aac-4654-43b3-aa42-710246c88b00"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.606564 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "039e5aac-4654-43b3-aa42-710246c88b00" (UID: "039e5aac-4654-43b3-aa42-710246c88b00"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.612847 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.612877 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.612886 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.612894 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:07 crc 
kubenswrapper[4770]: I0126 19:06:07.612902 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grcvz\" (UniqueName: \"kubernetes.io/projected/039e5aac-4654-43b3-aa42-710246c88b00-kube-api-access-grcvz\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.612912 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/039e5aac-4654-43b3-aa42-710246c88b00-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.614459 4770 generic.go:334] "Generic (PLEG): container finished" podID="039e5aac-4654-43b3-aa42-710246c88b00" containerID="7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9" exitCode=0 Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.614647 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-smmrh" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="registry-server" containerID="cri-o://3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8" gracePeriod=2 Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.614957 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-898947885-9fsdq" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.619972 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-898947885-9fsdq" event={"ID":"039e5aac-4654-43b3-aa42-710246c88b00","Type":"ContainerDied","Data":"7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9"} Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.620012 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-898947885-9fsdq" event={"ID":"039e5aac-4654-43b3-aa42-710246c88b00","Type":"ContainerDied","Data":"2baf94c3c733fe7de49985e67db543e6fa11ee3398b510477241348cee97424e"} Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.620030 4770 scope.go:117] "RemoveContainer" containerID="7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.651961 4770 scope.go:117] "RemoveContainer" containerID="85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.655484 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-898947885-9fsdq"] Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.664545 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-898947885-9fsdq"] Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.688944 4770 scope.go:117] "RemoveContainer" containerID="7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9" Jan 26 19:06:07 crc kubenswrapper[4770]: E0126 19:06:07.689468 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9\": container with ID starting with 7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9 not found: ID does not exist" 
containerID="7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.689540 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9"} err="failed to get container status \"7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9\": rpc error: code = NotFound desc = could not find container \"7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9\": container with ID starting with 7cd868638a923a24c87687360b6dc9d34fc5a20219e6a423d1147794946f7af9 not found: ID does not exist" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.689578 4770 scope.go:117] "RemoveContainer" containerID="85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574" Jan 26 19:06:07 crc kubenswrapper[4770]: E0126 19:06:07.689932 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574\": container with ID starting with 85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574 not found: ID does not exist" containerID="85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.689962 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574"} err="failed to get container status \"85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574\": rpc error: code = NotFound desc = could not find container \"85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574\": container with ID starting with 85e29369dae198a9f5b8f19fc60a0482f95b84b68e1938fc4bafeb97d887c574 not found: ID does not exist" Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.791002 4770 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="039e5aac-4654-43b3-aa42-710246c88b00" path="/var/lib/kubelet/pods/039e5aac-4654-43b3-aa42-710246c88b00/volumes" Jan 26 19:06:07 crc kubenswrapper[4770]: W0126 19:06:07.900081 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bf0f8d0_3821_4f2d_98d5_eeb869043350.slice/crio-612f770e6a27f55967dca8c4a9193fbf02d81c5f7ca0a3683a95e41596349000 WatchSource:0}: Error finding container 612f770e6a27f55967dca8c4a9193fbf02d81c5f7ca0a3683a95e41596349000: Status 404 returned error can't find the container with id 612f770e6a27f55967dca8c4a9193fbf02d81c5f7ca0a3683a95e41596349000 Jan 26 19:06:07 crc kubenswrapper[4770]: I0126 19:06:07.901336 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cbc8554f7-j54ps"] Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.175957 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.222587 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qhmv\" (UniqueName: \"kubernetes.io/projected/46da2484-1760-43e3-b423-634fd2f24b24-kube-api-access-4qhmv\") pod \"46da2484-1760-43e3-b423-634fd2f24b24\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.222899 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-utilities\") pod \"46da2484-1760-43e3-b423-634fd2f24b24\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.223030 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-catalog-content\") pod \"46da2484-1760-43e3-b423-634fd2f24b24\" (UID: \"46da2484-1760-43e3-b423-634fd2f24b24\") " Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.223744 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-utilities" (OuterVolumeSpecName: "utilities") pod "46da2484-1760-43e3-b423-634fd2f24b24" (UID: "46da2484-1760-43e3-b423-634fd2f24b24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.238455 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46da2484-1760-43e3-b423-634fd2f24b24-kube-api-access-4qhmv" (OuterVolumeSpecName: "kube-api-access-4qhmv") pod "46da2484-1760-43e3-b423-634fd2f24b24" (UID: "46da2484-1760-43e3-b423-634fd2f24b24"). InnerVolumeSpecName "kube-api-access-4qhmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.325345 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qhmv\" (UniqueName: \"kubernetes.io/projected/46da2484-1760-43e3-b423-634fd2f24b24-kube-api-access-4qhmv\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.325403 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.404407 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46da2484-1760-43e3-b423-634fd2f24b24" (UID: "46da2484-1760-43e3-b423-634fd2f24b24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.427246 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46da2484-1760-43e3-b423-634fd2f24b24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.625558 4770 generic.go:334] "Generic (PLEG): container finished" podID="5bf0f8d0-3821-4f2d-98d5-eeb869043350" containerID="e74119d6a925d1aa41ac17d12e56ed18a418bf8cd5ddec64882fb558d286db31" exitCode=0 Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.625603 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" event={"ID":"5bf0f8d0-3821-4f2d-98d5-eeb869043350","Type":"ContainerDied","Data":"e74119d6a925d1aa41ac17d12e56ed18a418bf8cd5ddec64882fb558d286db31"} Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.625640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" event={"ID":"5bf0f8d0-3821-4f2d-98d5-eeb869043350","Type":"ContainerStarted","Data":"612f770e6a27f55967dca8c4a9193fbf02d81c5f7ca0a3683a95e41596349000"} Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.628583 4770 generic.go:334] "Generic (PLEG): container finished" podID="46da2484-1760-43e3-b423-634fd2f24b24" containerID="3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8" exitCode=0 Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.628710 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmrh" event={"ID":"46da2484-1760-43e3-b423-634fd2f24b24","Type":"ContainerDied","Data":"3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8"} Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.628805 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-smmrh" event={"ID":"46da2484-1760-43e3-b423-634fd2f24b24","Type":"ContainerDied","Data":"8dc98b1231eb4e23a992dd80d63dd340a44fdf2a63b7558e196151f03a086409"} Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.628820 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-smmrh" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.628830 4770 scope.go:117] "RemoveContainer" containerID="3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.801798 4770 scope.go:117] "RemoveContainer" containerID="230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.815588 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-smmrh"] Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.824704 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-smmrh"] Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.834729 4770 scope.go:117] "RemoveContainer" containerID="5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.862394 4770 scope.go:117] "RemoveContainer" containerID="3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8" Jan 26 19:06:08 crc kubenswrapper[4770]: E0126 19:06:08.862905 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8\": container with ID starting with 3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8 not found: ID does not exist" containerID="3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.862953 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8"} err="failed to get container status \"3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8\": rpc error: code = NotFound desc = could not find container 
\"3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8\": container with ID starting with 3fd1173908b7cafddf6e4a7a20af4f2ea14b641f37755599188e7ee946e063c8 not found: ID does not exist" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.862982 4770 scope.go:117] "RemoveContainer" containerID="230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8" Jan 26 19:06:08 crc kubenswrapper[4770]: E0126 19:06:08.863400 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8\": container with ID starting with 230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8 not found: ID does not exist" containerID="230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.863467 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8"} err="failed to get container status \"230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8\": rpc error: code = NotFound desc = could not find container \"230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8\": container with ID starting with 230a00bb1ccc4b1cc8f54a84172c48d358efc790b3c2b59da9443f70bf2f04f8 not found: ID does not exist" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.863513 4770 scope.go:117] "RemoveContainer" containerID="5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3" Jan 26 19:06:08 crc kubenswrapper[4770]: E0126 19:06:08.864127 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3\": container with ID starting with 5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3 not found: ID does not exist" 
containerID="5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3" Jan 26 19:06:08 crc kubenswrapper[4770]: I0126 19:06:08.864161 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3"} err="failed to get container status \"5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3\": rpc error: code = NotFound desc = could not find container \"5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3\": container with ID starting with 5c3c01285e33ad1ea8b3509bf9f1e719c65727458b9f5b0fd86dd165a42236d3 not found: ID does not exist" Jan 26 19:06:09 crc kubenswrapper[4770]: I0126 19:06:09.651154 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" event={"ID":"5bf0f8d0-3821-4f2d-98d5-eeb869043350","Type":"ContainerStarted","Data":"8ac9fc6a2b5c6ac8745527d6c59cee814af3e066d90ecd9e46e8f09b3f8e9493"} Jan 26 19:06:09 crc kubenswrapper[4770]: I0126 19:06:09.651769 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:09 crc kubenswrapper[4770]: I0126 19:06:09.679396 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" podStartSLOduration=3.679376373 podStartE2EDuration="3.679376373s" podCreationTimestamp="2026-01-26 19:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:06:09.672096662 +0000 UTC m=+1454.237003434" watchObservedRunningTime="2026-01-26 19:06:09.679376373 +0000 UTC m=+1454.244283105" Jan 26 19:06:09 crc kubenswrapper[4770]: I0126 19:06:09.788348 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46da2484-1760-43e3-b423-634fd2f24b24" path="/var/lib/kubelet/pods/46da2484-1760-43e3-b423-634fd2f24b24/volumes" Jan 26 19:06:17 
crc kubenswrapper[4770]: I0126 19:06:17.356963 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cbc8554f7-j54ps" Jan 26 19:06:17 crc kubenswrapper[4770]: I0126 19:06:17.453524 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674c57545f-gh46h"] Jan 26 19:06:17 crc kubenswrapper[4770]: I0126 19:06:17.453903 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674c57545f-gh46h" podUID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerName="dnsmasq-dns" containerID="cri-o://17c98a31e93da52937c3ce6b3182b90bda2c5407309f4a1eff8bff1681f13a34" gracePeriod=10 Jan 26 19:06:17 crc kubenswrapper[4770]: I0126 19:06:17.751280 4770 generic.go:334] "Generic (PLEG): container finished" podID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerID="17c98a31e93da52937c3ce6b3182b90bda2c5407309f4a1eff8bff1681f13a34" exitCode=0 Jan 26 19:06:17 crc kubenswrapper[4770]: I0126 19:06:17.751363 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c57545f-gh46h" event={"ID":"4702de79-babd-47eb-9c34-cac0efbfb08d","Type":"ContainerDied","Data":"17c98a31e93da52937c3ce6b3182b90bda2c5407309f4a1eff8bff1681f13a34"} Jan 26 19:06:17 crc kubenswrapper[4770]: I0126 19:06:17.967155 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.147631 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-openstack-edpm-ipam\") pod \"4702de79-babd-47eb-9c34-cac0efbfb08d\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.147748 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-sb\") pod \"4702de79-babd-47eb-9c34-cac0efbfb08d\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.147800 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-nb\") pod \"4702de79-babd-47eb-9c34-cac0efbfb08d\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.147898 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-svc\") pod \"4702de79-babd-47eb-9c34-cac0efbfb08d\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.147955 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-config\") pod \"4702de79-babd-47eb-9c34-cac0efbfb08d\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.148065 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-swift-storage-0\") pod \"4702de79-babd-47eb-9c34-cac0efbfb08d\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.148094 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbvr9\" (UniqueName: \"kubernetes.io/projected/4702de79-babd-47eb-9c34-cac0efbfb08d-kube-api-access-fbvr9\") pod \"4702de79-babd-47eb-9c34-cac0efbfb08d\" (UID: \"4702de79-babd-47eb-9c34-cac0efbfb08d\") " Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.157384 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4702de79-babd-47eb-9c34-cac0efbfb08d-kube-api-access-fbvr9" (OuterVolumeSpecName: "kube-api-access-fbvr9") pod "4702de79-babd-47eb-9c34-cac0efbfb08d" (UID: "4702de79-babd-47eb-9c34-cac0efbfb08d"). InnerVolumeSpecName "kube-api-access-fbvr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.229480 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4702de79-babd-47eb-9c34-cac0efbfb08d" (UID: "4702de79-babd-47eb-9c34-cac0efbfb08d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.236823 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4702de79-babd-47eb-9c34-cac0efbfb08d" (UID: "4702de79-babd-47eb-9c34-cac0efbfb08d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.240826 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4702de79-babd-47eb-9c34-cac0efbfb08d" (UID: "4702de79-babd-47eb-9c34-cac0efbfb08d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.241041 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-config" (OuterVolumeSpecName: "config") pod "4702de79-babd-47eb-9c34-cac0efbfb08d" (UID: "4702de79-babd-47eb-9c34-cac0efbfb08d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.247408 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4702de79-babd-47eb-9c34-cac0efbfb08d" (UID: "4702de79-babd-47eb-9c34-cac0efbfb08d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.250848 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.250894 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbvr9\" (UniqueName: \"kubernetes.io/projected/4702de79-babd-47eb-9c34-cac0efbfb08d-kube-api-access-fbvr9\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.250912 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.250928 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.250945 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.250956 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.255811 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4702de79-babd-47eb-9c34-cac0efbfb08d" (UID: 
"4702de79-babd-47eb-9c34-cac0efbfb08d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.352214 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4702de79-babd-47eb-9c34-cac0efbfb08d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.763900 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c57545f-gh46h" event={"ID":"4702de79-babd-47eb-9c34-cac0efbfb08d","Type":"ContainerDied","Data":"2a67600f6f061216f9ce4598cbda0e4dd2c5ee08c9ecec88af2944b53198adce"} Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.763997 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674c57545f-gh46h" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.764268 4770 scope.go:117] "RemoveContainer" containerID="17c98a31e93da52937c3ce6b3182b90bda2c5407309f4a1eff8bff1681f13a34" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.804079 4770 scope.go:117] "RemoveContainer" containerID="c5915d346c243900d047e24cfc0d07b4cabfdc976088dce8596a171e270c79d6" Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.818292 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674c57545f-gh46h"] Jan 26 19:06:18 crc kubenswrapper[4770]: I0126 19:06:18.832794 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674c57545f-gh46h"] Jan 26 19:06:19 crc kubenswrapper[4770]: I0126 19:06:19.791886 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4702de79-babd-47eb-9c34-cac0efbfb08d" path="/var/lib/kubelet/pods/4702de79-babd-47eb-9c34-cac0efbfb08d/volumes" Jan 26 19:06:23 crc kubenswrapper[4770]: I0126 19:06:23.825120 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="22b25319-9d84-42f2-b5ed-127c06f29bbb" containerID="1ed61fbcf283c7be73523241c4aeed7a2a9ee515467cbe423b595dcd0c69daac" exitCode=0 Jan 26 19:06:23 crc kubenswrapper[4770]: I0126 19:06:23.825269 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22b25319-9d84-42f2-b5ed-127c06f29bbb","Type":"ContainerDied","Data":"1ed61fbcf283c7be73523241c4aeed7a2a9ee515467cbe423b595dcd0c69daac"} Jan 26 19:06:23 crc kubenswrapper[4770]: I0126 19:06:23.828632 4770 generic.go:334] "Generic (PLEG): container finished" podID="276b57ae-3637-49f3-a25c-9e8d7fc369ba" containerID="0fd3a72b68692f57741ab546a404d818d85bdd41fa28813e2c85e4db41575dd2" exitCode=0 Jan 26 19:06:23 crc kubenswrapper[4770]: I0126 19:06:23.828680 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"276b57ae-3637-49f3-a25c-9e8d7fc369ba","Type":"ContainerDied","Data":"0fd3a72b68692f57741ab546a404d818d85bdd41fa28813e2c85e4db41575dd2"} Jan 26 19:06:25 crc kubenswrapper[4770]: I0126 19:06:25.861888 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"22b25319-9d84-42f2-b5ed-127c06f29bbb","Type":"ContainerStarted","Data":"44d7566820bd4d5951a4366065fbb187bf8b9decf55dc23f0a7a3d0c702a22bb"} Jan 26 19:06:25 crc kubenswrapper[4770]: I0126 19:06:25.865013 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"276b57ae-3637-49f3-a25c-9e8d7fc369ba","Type":"ContainerStarted","Data":"2b5cbf0712dccc55a5bf510eb611e039b97e7c3b8bc32b400ce8da1f4960c0c0"} Jan 26 19:06:26 crc kubenswrapper[4770]: I0126 19:06:26.878529 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:06:26 crc kubenswrapper[4770]: I0126 19:06:26.878999 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 19:06:26 crc kubenswrapper[4770]: I0126 
19:06:26.901138 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.901123497 podStartE2EDuration="40.901123497s" podCreationTimestamp="2026-01-26 19:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:06:26.898967637 +0000 UTC m=+1471.463874359" watchObservedRunningTime="2026-01-26 19:06:26.901123497 +0000 UTC m=+1471.466030229" Jan 26 19:06:26 crc kubenswrapper[4770]: I0126 19:06:26.925194 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.925176261 podStartE2EDuration="39.925176261s" podCreationTimestamp="2026-01-26 19:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:06:26.922846087 +0000 UTC m=+1471.487752839" watchObservedRunningTime="2026-01-26 19:06:26.925176261 +0000 UTC m=+1471.490083013" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.548315 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k"] Jan 26 19:06:35 crc kubenswrapper[4770]: E0126 19:06:35.549208 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerName="dnsmasq-dns" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.549221 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerName="dnsmasq-dns" Jan 26 19:06:35 crc kubenswrapper[4770]: E0126 19:06:35.549239 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerName="init" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.549245 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerName="init" Jan 26 19:06:35 crc kubenswrapper[4770]: E0126 19:06:35.549265 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="registry-server" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.549271 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="registry-server" Jan 26 19:06:35 crc kubenswrapper[4770]: E0126 19:06:35.549280 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="extract-content" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.549286 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="extract-content" Jan 26 19:06:35 crc kubenswrapper[4770]: E0126 19:06:35.549297 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="039e5aac-4654-43b3-aa42-710246c88b00" containerName="init" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.549302 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="039e5aac-4654-43b3-aa42-710246c88b00" containerName="init" Jan 26 19:06:35 crc kubenswrapper[4770]: E0126 19:06:35.549317 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="extract-utilities" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.549323 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="extract-utilities" Jan 26 19:06:35 crc kubenswrapper[4770]: E0126 19:06:35.549343 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="039e5aac-4654-43b3-aa42-710246c88b00" containerName="dnsmasq-dns" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.549351 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="039e5aac-4654-43b3-aa42-710246c88b00" 
containerName="dnsmasq-dns" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.566012 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="039e5aac-4654-43b3-aa42-710246c88b00" containerName="dnsmasq-dns" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.566080 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4702de79-babd-47eb-9c34-cac0efbfb08d" containerName="dnsmasq-dns" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.566093 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="46da2484-1760-43e3-b423-634fd2f24b24" containerName="registry-server" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.567097 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.570392 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.570831 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.571132 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.571302 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.612938 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k"] Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.635263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.635321 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7l49\" (UniqueName: \"kubernetes.io/projected/4ae332ee-80e2-4c02-a235-a318900f5ab4-kube-api-access-l7l49\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.635622 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.635852 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.737075 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: 
\"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.737147 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7l49\" (UniqueName: \"kubernetes.io/projected/4ae332ee-80e2-4c02-a235-a318900f5ab4-kube-api-access-l7l49\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.737231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.737312 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.742598 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 
19:06:35.742991 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.743219 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.760516 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7l49\" (UniqueName: \"kubernetes.io/projected/4ae332ee-80e2-4c02-a235-a318900f5ab4-kube-api-access-l7l49\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:35 crc kubenswrapper[4770]: I0126 19:06:35.917687 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:06:36 crc kubenswrapper[4770]: I0126 19:06:36.777887 4770 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-bvh46 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 19:06:36 crc kubenswrapper[4770]: I0126 19:06:36.777914 4770 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-bvh46 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 19:06:36 crc kubenswrapper[4770]: I0126 19:06:36.778209 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" podUID="4f522286-ca46-4767-8813-5d5079d1d108" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:06:36 crc kubenswrapper[4770]: I0126 19:06:36.778154 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bvh46" podUID="4f522286-ca46-4767-8813-5d5079d1d108" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 19:06:37 crc kubenswrapper[4770]: I0126 19:06:37.595638 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k"] Jan 26 19:06:37 crc kubenswrapper[4770]: I0126 
19:06:37.631604 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="22b25319-9d84-42f2-b5ed-127c06f29bbb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.227:5671: connect: connection refused" Jan 26 19:06:38 crc kubenswrapper[4770]: I0126 19:06:38.001843 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" event={"ID":"4ae332ee-80e2-4c02-a235-a318900f5ab4","Type":"ContainerStarted","Data":"88bbbec480661d5c0c15c6cf5d9151c375ee663b30588522c67db163921996d9"} Jan 26 19:06:38 crc kubenswrapper[4770]: I0126 19:06:38.102422 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="276b57ae-3637-49f3-a25c-9e8d7fc369ba" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.228:5671: connect: connection refused" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.239025 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hvf2g"] Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.244225 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.257497 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-utilities\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.257610 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fg7b\" (UniqueName: \"kubernetes.io/projected/a424e215-612b-4e14-be2f-19e83f95e8ce-kube-api-access-7fg7b\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.257673 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-catalog-content\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.271756 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvf2g"] Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.359495 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-catalog-content\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.359607 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-utilities\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.359690 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fg7b\" (UniqueName: \"kubernetes.io/projected/a424e215-612b-4e14-be2f-19e83f95e8ce-kube-api-access-7fg7b\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.361651 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-catalog-content\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.362154 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-utilities\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.382518 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fg7b\" (UniqueName: \"kubernetes.io/projected/a424e215-612b-4e14-be2f-19e83f95e8ce-kube-api-access-7fg7b\") pod \"certified-operators-hvf2g\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:41 crc kubenswrapper[4770]: I0126 19:06:41.588277 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:06:46 crc kubenswrapper[4770]: I0126 19:06:46.876866 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2zzv6"] Jan 26 19:06:46 crc kubenswrapper[4770]: I0126 19:06:46.880138 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:46 crc kubenswrapper[4770]: I0126 19:06:46.888329 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zzv6"] Jan 26 19:06:46 crc kubenswrapper[4770]: I0126 19:06:46.899758 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-utilities\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:46 crc kubenswrapper[4770]: I0126 19:06:46.899834 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-catalog-content\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:46 crc kubenswrapper[4770]: I0126 19:06:46.899916 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmvbd\" (UniqueName: \"kubernetes.io/projected/dde3cb38-99db-4fea-97a7-347da0271f6d-kube-api-access-zmvbd\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.001969 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-catalog-content\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.002090 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmvbd\" (UniqueName: \"kubernetes.io/projected/dde3cb38-99db-4fea-97a7-347da0271f6d-kube-api-access-zmvbd\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.002167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-utilities\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.002607 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-catalog-content\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.002678 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-utilities\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.037972 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmvbd\" (UniqueName: 
\"kubernetes.io/projected/dde3cb38-99db-4fea-97a7-347da0271f6d-kube-api-access-zmvbd\") pod \"redhat-marketplace-2zzv6\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.206362 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:47 crc kubenswrapper[4770]: I0126 19:06:47.631901 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 19:06:48 crc kubenswrapper[4770]: I0126 19:06:48.102954 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 19:06:48 crc kubenswrapper[4770]: I0126 19:06:48.114589 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" event={"ID":"4ae332ee-80e2-4c02-a235-a318900f5ab4","Type":"ContainerStarted","Data":"be1f593fb09b8db3272b905939f4e71b2382b56b431c80571db6ba4c03847e17"} Jan 26 19:06:48 crc kubenswrapper[4770]: I0126 19:06:48.179403 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" podStartSLOduration=3.007583716 podStartE2EDuration="13.17937629s" podCreationTimestamp="2026-01-26 19:06:35 +0000 UTC" firstStartedPulling="2026-01-26 19:06:37.602158859 +0000 UTC m=+1482.167065591" lastFinishedPulling="2026-01-26 19:06:47.773951423 +0000 UTC m=+1492.338858165" observedRunningTime="2026-01-26 19:06:48.172413808 +0000 UTC m=+1492.737320560" watchObservedRunningTime="2026-01-26 19:06:48.17937629 +0000 UTC m=+1492.744283022" Jan 26 19:06:48 crc kubenswrapper[4770]: I0126 19:06:48.242297 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zzv6"] Jan 26 19:06:48 crc kubenswrapper[4770]: I0126 19:06:48.301469 4770 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hvf2g"] Jan 26 19:06:49 crc kubenswrapper[4770]: I0126 19:06:49.129864 4770 generic.go:334] "Generic (PLEG): container finished" podID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerID="e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f" exitCode=0 Jan 26 19:06:49 crc kubenswrapper[4770]: I0126 19:06:49.129924 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zzv6" event={"ID":"dde3cb38-99db-4fea-97a7-347da0271f6d","Type":"ContainerDied","Data":"e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f"} Jan 26 19:06:49 crc kubenswrapper[4770]: I0126 19:06:49.130259 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zzv6" event={"ID":"dde3cb38-99db-4fea-97a7-347da0271f6d","Type":"ContainerStarted","Data":"383a9aee1179f75eef6eea3d62623dd3ab98ea5e456e7ee0ddd86413c9187485"} Jan 26 19:06:49 crc kubenswrapper[4770]: I0126 19:06:49.132342 4770 generic.go:334] "Generic (PLEG): container finished" podID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerID="fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4" exitCode=0 Jan 26 19:06:49 crc kubenswrapper[4770]: I0126 19:06:49.132435 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvf2g" event={"ID":"a424e215-612b-4e14-be2f-19e83f95e8ce","Type":"ContainerDied","Data":"fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4"} Jan 26 19:06:49 crc kubenswrapper[4770]: I0126 19:06:49.132505 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvf2g" event={"ID":"a424e215-612b-4e14-be2f-19e83f95e8ce","Type":"ContainerStarted","Data":"ce79a87fa1fd37ee5d484d30c1c6b2e81c009d7cddd88736ee1c47933e2bacc4"} Jan 26 19:06:50 crc kubenswrapper[4770]: I0126 19:06:50.141677 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zzv6" event={"ID":"dde3cb38-99db-4fea-97a7-347da0271f6d","Type":"ContainerStarted","Data":"67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1"} Jan 26 19:06:50 crc kubenswrapper[4770]: I0126 19:06:50.145746 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvf2g" event={"ID":"a424e215-612b-4e14-be2f-19e83f95e8ce","Type":"ContainerStarted","Data":"8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5"} Jan 26 19:06:51 crc kubenswrapper[4770]: I0126 19:06:51.160808 4770 generic.go:334] "Generic (PLEG): container finished" podID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerID="67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1" exitCode=0 Jan 26 19:06:51 crc kubenswrapper[4770]: I0126 19:06:51.160928 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zzv6" event={"ID":"dde3cb38-99db-4fea-97a7-347da0271f6d","Type":"ContainerDied","Data":"67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1"} Jan 26 19:06:51 crc kubenswrapper[4770]: I0126 19:06:51.165099 4770 generic.go:334] "Generic (PLEG): container finished" podID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerID="8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5" exitCode=0 Jan 26 19:06:51 crc kubenswrapper[4770]: I0126 19:06:51.165178 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvf2g" event={"ID":"a424e215-612b-4e14-be2f-19e83f95e8ce","Type":"ContainerDied","Data":"8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5"} Jan 26 19:06:53 crc kubenswrapper[4770]: I0126 19:06:53.203478 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvf2g" 
event={"ID":"a424e215-612b-4e14-be2f-19e83f95e8ce","Type":"ContainerStarted","Data":"5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770"} Jan 26 19:06:53 crc kubenswrapper[4770]: I0126 19:06:53.207774 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zzv6" event={"ID":"dde3cb38-99db-4fea-97a7-347da0271f6d","Type":"ContainerStarted","Data":"5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0"} Jan 26 19:06:53 crc kubenswrapper[4770]: I0126 19:06:53.236283 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hvf2g" podStartSLOduration=9.371762395 podStartE2EDuration="12.236262292s" podCreationTimestamp="2026-01-26 19:06:41 +0000 UTC" firstStartedPulling="2026-01-26 19:06:49.134302211 +0000 UTC m=+1493.699208933" lastFinishedPulling="2026-01-26 19:06:51.998802058 +0000 UTC m=+1496.563708830" observedRunningTime="2026-01-26 19:06:53.228936659 +0000 UTC m=+1497.793843401" watchObservedRunningTime="2026-01-26 19:06:53.236262292 +0000 UTC m=+1497.801169034" Jan 26 19:06:53 crc kubenswrapper[4770]: I0126 19:06:53.266605 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2zzv6" podStartSLOduration=4.408358544 podStartE2EDuration="7.266589498s" podCreationTimestamp="2026-01-26 19:06:46 +0000 UTC" firstStartedPulling="2026-01-26 19:06:49.132159552 +0000 UTC m=+1493.697066304" lastFinishedPulling="2026-01-26 19:06:51.990390506 +0000 UTC m=+1496.555297258" observedRunningTime="2026-01-26 19:06:53.255106372 +0000 UTC m=+1497.820013124" watchObservedRunningTime="2026-01-26 19:06:53.266589498 +0000 UTC m=+1497.831496230" Jan 26 19:06:57 crc kubenswrapper[4770]: I0126 19:06:57.207441 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:57 crc kubenswrapper[4770]: I0126 19:06:57.208365 
4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:57 crc kubenswrapper[4770]: I0126 19:06:57.258286 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:57 crc kubenswrapper[4770]: I0126 19:06:57.342117 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:58 crc kubenswrapper[4770]: I0126 19:06:58.251307 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zzv6"] Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.270194 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2zzv6" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerName="registry-server" containerID="cri-o://5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0" gracePeriod=2 Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.836620 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.980203 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmvbd\" (UniqueName: \"kubernetes.io/projected/dde3cb38-99db-4fea-97a7-347da0271f6d-kube-api-access-zmvbd\") pod \"dde3cb38-99db-4fea-97a7-347da0271f6d\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.980389 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-catalog-content\") pod \"dde3cb38-99db-4fea-97a7-347da0271f6d\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.982032 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-utilities\") pod \"dde3cb38-99db-4fea-97a7-347da0271f6d\" (UID: \"dde3cb38-99db-4fea-97a7-347da0271f6d\") " Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.983938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-utilities" (OuterVolumeSpecName: "utilities") pod "dde3cb38-99db-4fea-97a7-347da0271f6d" (UID: "dde3cb38-99db-4fea-97a7-347da0271f6d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.984191 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:06:59 crc kubenswrapper[4770]: I0126 19:06:59.986541 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde3cb38-99db-4fea-97a7-347da0271f6d-kube-api-access-zmvbd" (OuterVolumeSpecName: "kube-api-access-zmvbd") pod "dde3cb38-99db-4fea-97a7-347da0271f6d" (UID: "dde3cb38-99db-4fea-97a7-347da0271f6d"). InnerVolumeSpecName "kube-api-access-zmvbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.005979 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dde3cb38-99db-4fea-97a7-347da0271f6d" (UID: "dde3cb38-99db-4fea-97a7-347da0271f6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.086539 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmvbd\" (UniqueName: \"kubernetes.io/projected/dde3cb38-99db-4fea-97a7-347da0271f6d-kube-api-access-zmvbd\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.086576 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dde3cb38-99db-4fea-97a7-347da0271f6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.282032 4770 generic.go:334] "Generic (PLEG): container finished" podID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerID="5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0" exitCode=0 Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.282068 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zzv6" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.282092 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zzv6" event={"ID":"dde3cb38-99db-4fea-97a7-347da0271f6d","Type":"ContainerDied","Data":"5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0"} Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.282117 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zzv6" event={"ID":"dde3cb38-99db-4fea-97a7-347da0271f6d","Type":"ContainerDied","Data":"383a9aee1179f75eef6eea3d62623dd3ab98ea5e456e7ee0ddd86413c9187485"} Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.282135 4770 scope.go:117] "RemoveContainer" containerID="5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.285435 4770 generic.go:334] "Generic (PLEG): container 
finished" podID="4ae332ee-80e2-4c02-a235-a318900f5ab4" containerID="be1f593fb09b8db3272b905939f4e71b2382b56b431c80571db6ba4c03847e17" exitCode=0 Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.285459 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" event={"ID":"4ae332ee-80e2-4c02-a235-a318900f5ab4","Type":"ContainerDied","Data":"be1f593fb09b8db3272b905939f4e71b2382b56b431c80571db6ba4c03847e17"} Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.318082 4770 scope.go:117] "RemoveContainer" containerID="67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.330982 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.331114 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.333358 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zzv6"] Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.342909 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zzv6"] Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.350478 4770 scope.go:117] "RemoveContainer" containerID="e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.398010 4770 
scope.go:117] "RemoveContainer" containerID="5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0" Jan 26 19:07:00 crc kubenswrapper[4770]: E0126 19:07:00.398472 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0\": container with ID starting with 5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0 not found: ID does not exist" containerID="5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.398515 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0"} err="failed to get container status \"5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0\": rpc error: code = NotFound desc = could not find container \"5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0\": container with ID starting with 5818e988ad851294d0d6cc9e4c1ea4810b95ed4e494c3479848dc51ab744b3b0 not found: ID does not exist" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.398550 4770 scope.go:117] "RemoveContainer" containerID="67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1" Jan 26 19:07:00 crc kubenswrapper[4770]: E0126 19:07:00.398868 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1\": container with ID starting with 67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1 not found: ID does not exist" containerID="67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.398908 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1"} err="failed to get container status \"67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1\": rpc error: code = NotFound desc = could not find container \"67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1\": container with ID starting with 67576e060b0ff5fbf67e8a749b02d6203a5810bd79986eac4115060d804c44e1 not found: ID does not exist" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.398938 4770 scope.go:117] "RemoveContainer" containerID="e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f" Jan 26 19:07:00 crc kubenswrapper[4770]: E0126 19:07:00.399177 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f\": container with ID starting with e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f not found: ID does not exist" containerID="e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f" Jan 26 19:07:00 crc kubenswrapper[4770]: I0126 19:07:00.399212 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f"} err="failed to get container status \"e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f\": rpc error: code = NotFound desc = could not find container \"e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f\": container with ID starting with e8af401457643ab9e2b79b8a282bb500cabc56a3a30ace91128d3dabdfa6d65f not found: ID does not exist" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.589329 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.589609 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.656153 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.779915 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" path="/var/lib/kubelet/pods/dde3cb38-99db-4fea-97a7-347da0271f6d/volumes" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.784362 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.922552 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7l49\" (UniqueName: \"kubernetes.io/projected/4ae332ee-80e2-4c02-a235-a318900f5ab4-kube-api-access-l7l49\") pod \"4ae332ee-80e2-4c02-a235-a318900f5ab4\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.922670 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-repo-setup-combined-ca-bundle\") pod \"4ae332ee-80e2-4c02-a235-a318900f5ab4\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.922781 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-inventory\") pod \"4ae332ee-80e2-4c02-a235-a318900f5ab4\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.922830 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-ssh-key-openstack-edpm-ipam\") pod \"4ae332ee-80e2-4c02-a235-a318900f5ab4\" (UID: \"4ae332ee-80e2-4c02-a235-a318900f5ab4\") " Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.930156 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae332ee-80e2-4c02-a235-a318900f5ab4-kube-api-access-l7l49" (OuterVolumeSpecName: "kube-api-access-l7l49") pod "4ae332ee-80e2-4c02-a235-a318900f5ab4" (UID: "4ae332ee-80e2-4c02-a235-a318900f5ab4"). InnerVolumeSpecName "kube-api-access-l7l49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.938383 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4ae332ee-80e2-4c02-a235-a318900f5ab4" (UID: "4ae332ee-80e2-4c02-a235-a318900f5ab4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.951744 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ae332ee-80e2-4c02-a235-a318900f5ab4" (UID: "4ae332ee-80e2-4c02-a235-a318900f5ab4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:07:01 crc kubenswrapper[4770]: I0126 19:07:01.966186 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-inventory" (OuterVolumeSpecName: "inventory") pod "4ae332ee-80e2-4c02-a235-a318900f5ab4" (UID: "4ae332ee-80e2-4c02-a235-a318900f5ab4"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.024522 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7l49\" (UniqueName: \"kubernetes.io/projected/4ae332ee-80e2-4c02-a235-a318900f5ab4-kube-api-access-l7l49\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.024554 4770 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.024570 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.024583 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ae332ee-80e2-4c02-a235-a318900f5ab4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.312011 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.311990 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k" event={"ID":"4ae332ee-80e2-4c02-a235-a318900f5ab4","Type":"ContainerDied","Data":"88bbbec480661d5c0c15c6cf5d9151c375ee663b30588522c67db163921996d9"} Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.312086 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88bbbec480661d5c0c15c6cf5d9151c375ee663b30588522c67db163921996d9" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.444163 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.447770 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm"] Jan 26 19:07:02 crc kubenswrapper[4770]: E0126 19:07:02.451491 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerName="extract-utilities" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.451525 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerName="extract-utilities" Jan 26 19:07:02 crc kubenswrapper[4770]: E0126 19:07:02.451560 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerName="registry-server" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.451571 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerName="registry-server" Jan 26 19:07:02 crc kubenswrapper[4770]: E0126 19:07:02.451641 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" 
containerName="extract-content" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.451653 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerName="extract-content" Jan 26 19:07:02 crc kubenswrapper[4770]: E0126 19:07:02.451677 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae332ee-80e2-4c02-a235-a318900f5ab4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.451688 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae332ee-80e2-4c02-a235-a318900f5ab4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.455676 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde3cb38-99db-4fea-97a7-347da0271f6d" containerName="registry-server" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.455761 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae332ee-80e2-4c02-a235-a318900f5ab4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.458215 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.463949 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.470052 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.470318 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.470493 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.486708 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm"] Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.584070 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.584332 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.584577 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjprq\" (UniqueName: \"kubernetes.io/projected/dbfc185f-efba-4b46-b49a-0045340ae3cc-kube-api-access-pjprq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.687276 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.687407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjprq\" (UniqueName: \"kubernetes.io/projected/dbfc185f-efba-4b46-b49a-0045340ae3cc-kube-api-access-pjprq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.687452 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.692229 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: 
\"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.693393 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.717081 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjprq\" (UniqueName: \"kubernetes.io/projected/dbfc185f-efba-4b46-b49a-0045340ae3cc-kube-api-access-pjprq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-grrpm\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:02 crc kubenswrapper[4770]: I0126 19:07:02.784534 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:03 crc kubenswrapper[4770]: I0126 19:07:03.301389 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm"] Jan 26 19:07:03 crc kubenswrapper[4770]: W0126 19:07:03.304883 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbfc185f_efba_4b46_b49a_0045340ae3cc.slice/crio-e8ab3771c63731a593b68d321e2f1ee0acdcb0d2eeec508827b1945479ae4a2b WatchSource:0}: Error finding container e8ab3771c63731a593b68d321e2f1ee0acdcb0d2eeec508827b1945479ae4a2b: Status 404 returned error can't find the container with id e8ab3771c63731a593b68d321e2f1ee0acdcb0d2eeec508827b1945479ae4a2b Jan 26 19:07:03 crc kubenswrapper[4770]: I0126 19:07:03.321749 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" event={"ID":"dbfc185f-efba-4b46-b49a-0045340ae3cc","Type":"ContainerStarted","Data":"e8ab3771c63731a593b68d321e2f1ee0acdcb0d2eeec508827b1945479ae4a2b"} Jan 26 19:07:03 crc kubenswrapper[4770]: I0126 19:07:03.651724 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvf2g"] Jan 26 19:07:04 crc kubenswrapper[4770]: I0126 19:07:04.337104 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" event={"ID":"dbfc185f-efba-4b46-b49a-0045340ae3cc","Type":"ContainerStarted","Data":"3fb945a93474d3af46d154023297744eb00717455c70094ae487111aa2b0a142"} Jan 26 19:07:04 crc kubenswrapper[4770]: I0126 19:07:04.337161 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hvf2g" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerName="registry-server" 
containerID="cri-o://5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770" gracePeriod=2 Jan 26 19:07:04 crc kubenswrapper[4770]: I0126 19:07:04.365595 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" podStartSLOduration=1.888580275 podStartE2EDuration="2.365575248s" podCreationTimestamp="2026-01-26 19:07:02 +0000 UTC" firstStartedPulling="2026-01-26 19:07:03.30755232 +0000 UTC m=+1507.872459052" lastFinishedPulling="2026-01-26 19:07:03.784547293 +0000 UTC m=+1508.349454025" observedRunningTime="2026-01-26 19:07:04.355941792 +0000 UTC m=+1508.920848534" watchObservedRunningTime="2026-01-26 19:07:04.365575248 +0000 UTC m=+1508.930481980" Jan 26 19:07:04 crc kubenswrapper[4770]: I0126 19:07:04.842097 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.036138 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-utilities\") pod \"a424e215-612b-4e14-be2f-19e83f95e8ce\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.036307 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-catalog-content\") pod \"a424e215-612b-4e14-be2f-19e83f95e8ce\" (UID: \"a424e215-612b-4e14-be2f-19e83f95e8ce\") " Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.036416 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fg7b\" (UniqueName: \"kubernetes.io/projected/a424e215-612b-4e14-be2f-19e83f95e8ce-kube-api-access-7fg7b\") pod \"a424e215-612b-4e14-be2f-19e83f95e8ce\" (UID: 
\"a424e215-612b-4e14-be2f-19e83f95e8ce\") " Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.038040 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-utilities" (OuterVolumeSpecName: "utilities") pod "a424e215-612b-4e14-be2f-19e83f95e8ce" (UID: "a424e215-612b-4e14-be2f-19e83f95e8ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.044978 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a424e215-612b-4e14-be2f-19e83f95e8ce-kube-api-access-7fg7b" (OuterVolumeSpecName: "kube-api-access-7fg7b") pod "a424e215-612b-4e14-be2f-19e83f95e8ce" (UID: "a424e215-612b-4e14-be2f-19e83f95e8ce"). InnerVolumeSpecName "kube-api-access-7fg7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.106141 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a424e215-612b-4e14-be2f-19e83f95e8ce" (UID: "a424e215-612b-4e14-be2f-19e83f95e8ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.139048 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fg7b\" (UniqueName: \"kubernetes.io/projected/a424e215-612b-4e14-be2f-19e83f95e8ce-kube-api-access-7fg7b\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.139089 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.139104 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a424e215-612b-4e14-be2f-19e83f95e8ce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.349150 4770 generic.go:334] "Generic (PLEG): container finished" podID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerID="5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770" exitCode=0 Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.349218 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvf2g" event={"ID":"a424e215-612b-4e14-be2f-19e83f95e8ce","Type":"ContainerDied","Data":"5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770"} Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.349267 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hvf2g" event={"ID":"a424e215-612b-4e14-be2f-19e83f95e8ce","Type":"ContainerDied","Data":"ce79a87fa1fd37ee5d484d30c1c6b2e81c009d7cddd88736ee1c47933e2bacc4"} Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.349283 4770 scope.go:117] "RemoveContainer" containerID="5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 
19:07:05.349399 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hvf2g" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.373094 4770 scope.go:117] "RemoveContainer" containerID="8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.399682 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hvf2g"] Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.410867 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hvf2g"] Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.412884 4770 scope.go:117] "RemoveContainer" containerID="fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.475109 4770 scope.go:117] "RemoveContainer" containerID="5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770" Jan 26 19:07:05 crc kubenswrapper[4770]: E0126 19:07:05.483295 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770\": container with ID starting with 5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770 not found: ID does not exist" containerID="5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.483352 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770"} err="failed to get container status \"5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770\": rpc error: code = NotFound desc = could not find container \"5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770\": container with ID starting with 
5568b31b8c5b6b2e71a1f973ace41d5cd8fd048ec900b67cceceb52789f91770 not found: ID does not exist" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.483385 4770 scope.go:117] "RemoveContainer" containerID="8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5" Jan 26 19:07:05 crc kubenswrapper[4770]: E0126 19:07:05.489328 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5\": container with ID starting with 8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5 not found: ID does not exist" containerID="8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.489380 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5"} err="failed to get container status \"8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5\": rpc error: code = NotFound desc = could not find container \"8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5\": container with ID starting with 8e6e0c5cc262d6de453caec414c5b5c0a4f5b74eb70772335a9ccaec9a41c9b5 not found: ID does not exist" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.489411 4770 scope.go:117] "RemoveContainer" containerID="fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4" Jan 26 19:07:05 crc kubenswrapper[4770]: E0126 19:07:05.493906 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4\": container with ID starting with fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4 not found: ID does not exist" containerID="fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4" Jan 26 19:07:05 crc 
kubenswrapper[4770]: I0126 19:07:05.493957 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4"} err="failed to get container status \"fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4\": rpc error: code = NotFound desc = could not find container \"fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4\": container with ID starting with fbeedd8012b196596f67825957886891459076de4660b2033f6f216f7a1995f4 not found: ID does not exist" Jan 26 19:07:05 crc kubenswrapper[4770]: I0126 19:07:05.780064 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" path="/var/lib/kubelet/pods/a424e215-612b-4e14-be2f-19e83f95e8ce/volumes" Jan 26 19:07:07 crc kubenswrapper[4770]: I0126 19:07:07.373239 4770 generic.go:334] "Generic (PLEG): container finished" podID="dbfc185f-efba-4b46-b49a-0045340ae3cc" containerID="3fb945a93474d3af46d154023297744eb00717455c70094ae487111aa2b0a142" exitCode=0 Jan 26 19:07:07 crc kubenswrapper[4770]: I0126 19:07:07.373315 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" event={"ID":"dbfc185f-efba-4b46-b49a-0045340ae3cc","Type":"ContainerDied","Data":"3fb945a93474d3af46d154023297744eb00717455c70094ae487111aa2b0a142"} Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.818814 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.838784 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjprq\" (UniqueName: \"kubernetes.io/projected/dbfc185f-efba-4b46-b49a-0045340ae3cc-kube-api-access-pjprq\") pod \"dbfc185f-efba-4b46-b49a-0045340ae3cc\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.838868 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-ssh-key-openstack-edpm-ipam\") pod \"dbfc185f-efba-4b46-b49a-0045340ae3cc\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.838965 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-inventory\") pod \"dbfc185f-efba-4b46-b49a-0045340ae3cc\" (UID: \"dbfc185f-efba-4b46-b49a-0045340ae3cc\") " Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.855599 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfc185f-efba-4b46-b49a-0045340ae3cc-kube-api-access-pjprq" (OuterVolumeSpecName: "kube-api-access-pjprq") pod "dbfc185f-efba-4b46-b49a-0045340ae3cc" (UID: "dbfc185f-efba-4b46-b49a-0045340ae3cc"). InnerVolumeSpecName "kube-api-access-pjprq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.880999 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbfc185f-efba-4b46-b49a-0045340ae3cc" (UID: "dbfc185f-efba-4b46-b49a-0045340ae3cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.897320 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-inventory" (OuterVolumeSpecName: "inventory") pod "dbfc185f-efba-4b46-b49a-0045340ae3cc" (UID: "dbfc185f-efba-4b46-b49a-0045340ae3cc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.940413 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjprq\" (UniqueName: \"kubernetes.io/projected/dbfc185f-efba-4b46-b49a-0045340ae3cc-kube-api-access-pjprq\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.940444 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:08 crc kubenswrapper[4770]: I0126 19:07:08.940453 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbfc185f-efba-4b46-b49a-0045340ae3cc-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.393816 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" 
event={"ID":"dbfc185f-efba-4b46-b49a-0045340ae3cc","Type":"ContainerDied","Data":"e8ab3771c63731a593b68d321e2f1ee0acdcb0d2eeec508827b1945479ae4a2b"} Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.393867 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ab3771c63731a593b68d321e2f1ee0acdcb0d2eeec508827b1945479ae4a2b" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.393952 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-grrpm" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.483720 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd"] Jan 26 19:07:09 crc kubenswrapper[4770]: E0126 19:07:09.484522 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerName="extract-utilities" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.484543 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerName="extract-utilities" Jan 26 19:07:09 crc kubenswrapper[4770]: E0126 19:07:09.484563 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfc185f-efba-4b46-b49a-0045340ae3cc" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.484570 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfc185f-efba-4b46-b49a-0045340ae3cc" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:09 crc kubenswrapper[4770]: E0126 19:07:09.484589 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerName="registry-server" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.484594 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" 
containerName="registry-server" Jan 26 19:07:09 crc kubenswrapper[4770]: E0126 19:07:09.484607 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerName="extract-content" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.484613 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerName="extract-content" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.484799 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a424e215-612b-4e14-be2f-19e83f95e8ce" containerName="registry-server" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.484817 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfc185f-efba-4b46-b49a-0045340ae3cc" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.485522 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.487786 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.488156 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.489424 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.489622 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.500136 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd"] Jan 26 19:07:09 crc 
kubenswrapper[4770]: I0126 19:07:09.554778 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.554921 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.554975 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5sk5\" (UniqueName: \"kubernetes.io/projected/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-kube-api-access-q5sk5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.555033 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.655912 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.656040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.656108 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.656136 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5sk5\" (UniqueName: \"kubernetes.io/projected/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-kube-api-access-q5sk5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.662855 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: 
\"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.663496 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.675659 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.677008 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5sk5\" (UniqueName: \"kubernetes.io/projected/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-kube-api-access-q5sk5\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:09 crc kubenswrapper[4770]: I0126 19:07:09.810222 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:07:10 crc kubenswrapper[4770]: I0126 19:07:10.388673 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd"] Jan 26 19:07:11 crc kubenswrapper[4770]: I0126 19:07:11.410684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" event={"ID":"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5","Type":"ContainerStarted","Data":"a9afd9c95ec8c0f0e578b9f2c2554a6d93c0210892c8094c4db50ce51a56011a"} Jan 26 19:07:11 crc kubenswrapper[4770]: I0126 19:07:11.411211 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" event={"ID":"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5","Type":"ContainerStarted","Data":"77ce8d1e01803f80fd68f13f5ae9ccc032385e64747f752e25f26758a19503de"} Jan 26 19:07:11 crc kubenswrapper[4770]: I0126 19:07:11.434651 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" podStartSLOduration=1.752710016 podStartE2EDuration="2.434629298s" podCreationTimestamp="2026-01-26 19:07:09 +0000 UTC" firstStartedPulling="2026-01-26 19:07:10.40777007 +0000 UTC m=+1514.972676802" lastFinishedPulling="2026-01-26 19:07:11.089689352 +0000 UTC m=+1515.654596084" observedRunningTime="2026-01-26 19:07:11.425965369 +0000 UTC m=+1515.990872101" watchObservedRunningTime="2026-01-26 19:07:11.434629298 +0000 UTC m=+1515.999536020" Jan 26 19:07:17 crc kubenswrapper[4770]: I0126 19:07:17.152351 4770 scope.go:117] "RemoveContainer" containerID="80ad2f4535615622c532dd8fefd8689692ed919e34cd83d0fa8991c1c8d9b3bb" Jan 26 19:07:17 crc kubenswrapper[4770]: I0126 19:07:17.183348 4770 scope.go:117] "RemoveContainer" containerID="1a7747b6b1e9668f53c62b6fda82b664cc40c65af97e3de8a0afa5897f46c891" Jan 26 19:07:19 crc 
kubenswrapper[4770]: I0126 19:07:19.926211 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rpv47"] Jan 26 19:07:19 crc kubenswrapper[4770]: I0126 19:07:19.930000 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:19 crc kubenswrapper[4770]: I0126 19:07:19.952422 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpv47"] Jan 26 19:07:19 crc kubenswrapper[4770]: I0126 19:07:19.964206 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-utilities\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:19 crc kubenswrapper[4770]: I0126 19:07:19.964244 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfstl\" (UniqueName: \"kubernetes.io/projected/b0fdd3da-818f-4db2-a4d9-d348bad91e78-kube-api-access-dfstl\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:19 crc kubenswrapper[4770]: I0126 19:07:19.964371 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-catalog-content\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.065081 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-utilities\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.065179 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfstl\" (UniqueName: \"kubernetes.io/projected/b0fdd3da-818f-4db2-a4d9-d348bad91e78-kube-api-access-dfstl\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.065297 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-catalog-content\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.065726 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-catalog-content\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.065743 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-utilities\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.086264 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfstl\" (UniqueName: 
\"kubernetes.io/projected/b0fdd3da-818f-4db2-a4d9-d348bad91e78-kube-api-access-dfstl\") pod \"community-operators-rpv47\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.247226 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:20 crc kubenswrapper[4770]: I0126 19:07:20.814708 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpv47"] Jan 26 19:07:21 crc kubenswrapper[4770]: I0126 19:07:21.518956 4770 generic.go:334] "Generic (PLEG): container finished" podID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerID="1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c" exitCode=0 Jan 26 19:07:21 crc kubenswrapper[4770]: I0126 19:07:21.519039 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpv47" event={"ID":"b0fdd3da-818f-4db2-a4d9-d348bad91e78","Type":"ContainerDied","Data":"1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c"} Jan 26 19:07:21 crc kubenswrapper[4770]: I0126 19:07:21.519286 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpv47" event={"ID":"b0fdd3da-818f-4db2-a4d9-d348bad91e78","Type":"ContainerStarted","Data":"1500b50fc5481e0797e5533881fdbd1b315d634af9b30292bfd97933823357db"} Jan 26 19:07:22 crc kubenswrapper[4770]: I0126 19:07:22.535561 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpv47" event={"ID":"b0fdd3da-818f-4db2-a4d9-d348bad91e78","Type":"ContainerStarted","Data":"8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2"} Jan 26 19:07:23 crc kubenswrapper[4770]: I0126 19:07:23.548343 4770 generic.go:334] "Generic (PLEG): container finished" podID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" 
containerID="8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2" exitCode=0 Jan 26 19:07:23 crc kubenswrapper[4770]: I0126 19:07:23.548796 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpv47" event={"ID":"b0fdd3da-818f-4db2-a4d9-d348bad91e78","Type":"ContainerDied","Data":"8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2"} Jan 26 19:07:24 crc kubenswrapper[4770]: I0126 19:07:24.561150 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpv47" event={"ID":"b0fdd3da-818f-4db2-a4d9-d348bad91e78","Type":"ContainerStarted","Data":"ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88"} Jan 26 19:07:24 crc kubenswrapper[4770]: I0126 19:07:24.588049 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rpv47" podStartSLOduration=3.133118147 podStartE2EDuration="5.588026563s" podCreationTimestamp="2026-01-26 19:07:19 +0000 UTC" firstStartedPulling="2026-01-26 19:07:21.521559109 +0000 UTC m=+1526.086465841" lastFinishedPulling="2026-01-26 19:07:23.976467525 +0000 UTC m=+1528.541374257" observedRunningTime="2026-01-26 19:07:24.580610869 +0000 UTC m=+1529.145517601" watchObservedRunningTime="2026-01-26 19:07:24.588026563 +0000 UTC m=+1529.152933295" Jan 26 19:07:30 crc kubenswrapper[4770]: I0126 19:07:30.248089 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:30 crc kubenswrapper[4770]: I0126 19:07:30.248667 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:30 crc kubenswrapper[4770]: I0126 19:07:30.297373 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:30 crc kubenswrapper[4770]: I0126 
19:07:30.330412 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:07:30 crc kubenswrapper[4770]: I0126 19:07:30.330473 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:07:30 crc kubenswrapper[4770]: I0126 19:07:30.695721 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:30 crc kubenswrapper[4770]: I0126 19:07:30.740588 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpv47"] Jan 26 19:07:32 crc kubenswrapper[4770]: I0126 19:07:32.650939 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rpv47" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="registry-server" containerID="cri-o://ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88" gracePeriod=2 Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.151722 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.325344 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfstl\" (UniqueName: \"kubernetes.io/projected/b0fdd3da-818f-4db2-a4d9-d348bad91e78-kube-api-access-dfstl\") pod \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.325429 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-utilities\") pod \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.325487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-catalog-content\") pod \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\" (UID: \"b0fdd3da-818f-4db2-a4d9-d348bad91e78\") " Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.326291 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-utilities" (OuterVolumeSpecName: "utilities") pod "b0fdd3da-818f-4db2-a4d9-d348bad91e78" (UID: "b0fdd3da-818f-4db2-a4d9-d348bad91e78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.330978 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fdd3da-818f-4db2-a4d9-d348bad91e78-kube-api-access-dfstl" (OuterVolumeSpecName: "kube-api-access-dfstl") pod "b0fdd3da-818f-4db2-a4d9-d348bad91e78" (UID: "b0fdd3da-818f-4db2-a4d9-d348bad91e78"). InnerVolumeSpecName "kube-api-access-dfstl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.373574 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0fdd3da-818f-4db2-a4d9-d348bad91e78" (UID: "b0fdd3da-818f-4db2-a4d9-d348bad91e78"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.427303 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfstl\" (UniqueName: \"kubernetes.io/projected/b0fdd3da-818f-4db2-a4d9-d348bad91e78-kube-api-access-dfstl\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.427338 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.427350 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fdd3da-818f-4db2-a4d9-d348bad91e78-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.660070 4770 generic.go:334] "Generic (PLEG): container finished" podID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerID="ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88" exitCode=0 Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.660129 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpv47" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.660127 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpv47" event={"ID":"b0fdd3da-818f-4db2-a4d9-d348bad91e78","Type":"ContainerDied","Data":"ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88"} Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.660214 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpv47" event={"ID":"b0fdd3da-818f-4db2-a4d9-d348bad91e78","Type":"ContainerDied","Data":"1500b50fc5481e0797e5533881fdbd1b315d634af9b30292bfd97933823357db"} Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.660257 4770 scope.go:117] "RemoveContainer" containerID="ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.685338 4770 scope.go:117] "RemoveContainer" containerID="8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.703037 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpv47"] Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.713976 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rpv47"] Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.728053 4770 scope.go:117] "RemoveContainer" containerID="1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.765220 4770 scope.go:117] "RemoveContainer" containerID="ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88" Jan 26 19:07:33 crc kubenswrapper[4770]: E0126 19:07:33.765741 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88\": container with ID starting with ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88 not found: ID does not exist" containerID="ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.765791 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88"} err="failed to get container status \"ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88\": rpc error: code = NotFound desc = could not find container \"ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88\": container with ID starting with ae42cc11e7633e73ab4062262b13870e8b3be32b66fb07db29fa6594a1b68b88 not found: ID does not exist" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.765822 4770 scope.go:117] "RemoveContainer" containerID="8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2" Jan 26 19:07:33 crc kubenswrapper[4770]: E0126 19:07:33.767943 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2\": container with ID starting with 8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2 not found: ID does not exist" containerID="8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.767984 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2"} err="failed to get container status \"8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2\": rpc error: code = NotFound desc = could not find container \"8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2\": container with ID 
starting with 8cadeb40363a980f87673e7dd652c5ba09706d13889aa3e226247ed776e00ea2 not found: ID does not exist" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.768008 4770 scope.go:117] "RemoveContainer" containerID="1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c" Jan 26 19:07:33 crc kubenswrapper[4770]: E0126 19:07:33.768307 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c\": container with ID starting with 1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c not found: ID does not exist" containerID="1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.768328 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c"} err="failed to get container status \"1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c\": rpc error: code = NotFound desc = could not find container \"1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c\": container with ID starting with 1ee36bed71de68bbf4217a3c102e73f1f202569b8c217b06a9801ca99dda382c not found: ID does not exist" Jan 26 19:07:33 crc kubenswrapper[4770]: I0126 19:07:33.779001 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" path="/var/lib/kubelet/pods/b0fdd3da-818f-4db2-a4d9-d348bad91e78/volumes" Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.331131 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 
19:08:00.331641 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.331682 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.332419 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.332465 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" gracePeriod=600 Jan 26 19:08:00 crc kubenswrapper[4770]: E0126 19:08:00.454067 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.950071 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" exitCode=0 Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.950123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3"} Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.950176 4770 scope.go:117] "RemoveContainer" containerID="386f64784b2c322d50fefdfd9ed37a3405a8ac95082cf30f59e32e718434f3cd" Jan 26 19:08:00 crc kubenswrapper[4770]: I0126 19:08:00.951338 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:08:00 crc kubenswrapper[4770]: E0126 19:08:00.952028 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:08:15 crc kubenswrapper[4770]: I0126 19:08:15.774035 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:08:15 crc kubenswrapper[4770]: E0126 19:08:15.774927 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 
19:08:17 crc kubenswrapper[4770]: I0126 19:08:17.362597 4770 scope.go:117] "RemoveContainer" containerID="00903d6abfbf16fa4eafacde69687e6d84a3a183cb253c689eedfbafe4d0fda0" Jan 26 19:08:30 crc kubenswrapper[4770]: I0126 19:08:30.767970 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:08:30 crc kubenswrapper[4770]: E0126 19:08:30.768634 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:08:42 crc kubenswrapper[4770]: I0126 19:08:42.767689 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:08:42 crc kubenswrapper[4770]: E0126 19:08:42.768516 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:08:57 crc kubenswrapper[4770]: I0126 19:08:57.767193 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:08:57 crc kubenswrapper[4770]: E0126 19:08:57.768341 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:09:11 crc kubenswrapper[4770]: I0126 19:09:11.767594 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:09:11 crc kubenswrapper[4770]: E0126 19:09:11.768794 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:09:17 crc kubenswrapper[4770]: I0126 19:09:17.472981 4770 scope.go:117] "RemoveContainer" containerID="04d964d355e8e8ea6edd62ea5207b51e10c5fcb29494bcd4a887077bec5b6f38" Jan 26 19:09:17 crc kubenswrapper[4770]: I0126 19:09:17.517939 4770 scope.go:117] "RemoveContainer" containerID="67489b7c08648a6b6e4d621c1f3728a8eba595ee5425af3a116e6ada81b58764" Jan 26 19:09:17 crc kubenswrapper[4770]: I0126 19:09:17.543422 4770 scope.go:117] "RemoveContainer" containerID="896ba8a8849e0da6046913e6ff16538d82573ee21de21bd8e4a0dcd7423595af" Jan 26 19:09:17 crc kubenswrapper[4770]: I0126 19:09:17.561769 4770 scope.go:117] "RemoveContainer" containerID="102eb892f1022d60f9dd531b5b4bbe8fac91a0df1bdd9a8ac38dada4a4116e4b" Jan 26 19:09:17 crc kubenswrapper[4770]: I0126 19:09:17.589249 4770 scope.go:117] "RemoveContainer" containerID="fb5df5fe8c56a40ce42b6fff7a1ddd31732ef697a9949d96908c818b99184329" Jan 26 19:09:26 crc kubenswrapper[4770]: I0126 19:09:26.768859 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:09:26 crc 
kubenswrapper[4770]: E0126 19:09:26.769434 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:09:37 crc kubenswrapper[4770]: I0126 19:09:37.767857 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:09:37 crc kubenswrapper[4770]: E0126 19:09:37.768759 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:09:50 crc kubenswrapper[4770]: I0126 19:09:50.767059 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:09:50 crc kubenswrapper[4770]: E0126 19:09:50.767941 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:10:01 crc kubenswrapper[4770]: I0126 19:10:01.767639 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 
26 19:10:01 crc kubenswrapper[4770]: E0126 19:10:01.768686 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:10:16 crc kubenswrapper[4770]: I0126 19:10:16.768789 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:10:16 crc kubenswrapper[4770]: E0126 19:10:16.770007 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:10:25 crc kubenswrapper[4770]: I0126 19:10:25.519027 4770 generic.go:334] "Generic (PLEG): container finished" podID="57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" containerID="a9afd9c95ec8c0f0e578b9f2c2554a6d93c0210892c8094c4db50ce51a56011a" exitCode=0 Jan 26 19:10:25 crc kubenswrapper[4770]: I0126 19:10:25.519176 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" event={"ID":"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5","Type":"ContainerDied","Data":"a9afd9c95ec8c0f0e578b9f2c2554a6d93c0210892c8094c4db50ce51a56011a"} Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.022842 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.108428 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5sk5\" (UniqueName: \"kubernetes.io/projected/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-kube-api-access-q5sk5\") pod \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.108513 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-ssh-key-openstack-edpm-ipam\") pod \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.108732 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-inventory\") pod \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.108903 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-bootstrap-combined-ca-bundle\") pod \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\" (UID: \"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5\") " Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.115100 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-kube-api-access-q5sk5" (OuterVolumeSpecName: "kube-api-access-q5sk5") pod "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" (UID: "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5"). InnerVolumeSpecName "kube-api-access-q5sk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.117187 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" (UID: "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.137750 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" (UID: "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.150455 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-inventory" (OuterVolumeSpecName: "inventory") pod "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" (UID: "57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.210598 4770 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.211148 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5sk5\" (UniqueName: \"kubernetes.io/projected/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-kube-api-access-q5sk5\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.211205 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.211260 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.542817 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" event={"ID":"57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5","Type":"ContainerDied","Data":"77ce8d1e01803f80fd68f13f5ae9ccc032385e64747f752e25f26758a19503de"} Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.542858 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ce8d1e01803f80fd68f13f5ae9ccc032385e64747f752e25f26758a19503de" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.542887 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.667390 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k"] Jan 26 19:10:27 crc kubenswrapper[4770]: E0126 19:10:27.667961 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="extract-content" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.667984 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="extract-content" Jan 26 19:10:27 crc kubenswrapper[4770]: E0126 19:10:27.668010 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.668022 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:27 crc kubenswrapper[4770]: E0126 19:10:27.668040 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="registry-server" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.668047 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="registry-server" Jan 26 19:10:27 crc kubenswrapper[4770]: E0126 19:10:27.668078 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="extract-utilities" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.668086 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="extract-utilities" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.668287 
4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.668304 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fdd3da-818f-4db2-a4d9-d348bad91e78" containerName="registry-server" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.669144 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.672240 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.672554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.672678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.675273 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.678896 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k"] Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.719904 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 
crc kubenswrapper[4770]: I0126 19:10:27.719968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.720061 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l69wl\" (UniqueName: \"kubernetes.io/projected/f9cfc064-c4a3-42cf-8193-9090da67b4db-kube-api-access-l69wl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.821233 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.821289 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.821368 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l69wl\" (UniqueName: 
\"kubernetes.io/projected/f9cfc064-c4a3-42cf-8193-9090da67b4db-kube-api-access-l69wl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.828269 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.845758 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.848354 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l69wl\" (UniqueName: \"kubernetes.io/projected/f9cfc064-c4a3-42cf-8193-9090da67b4db-kube-api-access-l69wl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:27 crc kubenswrapper[4770]: I0126 19:10:27.998988 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:10:28 crc kubenswrapper[4770]: I0126 19:10:28.593195 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k"] Jan 26 19:10:28 crc kubenswrapper[4770]: I0126 19:10:28.598475 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.067579 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zbpfl"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.082015 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pckwh"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.100387 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0242-account-create-update-kpq4x"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.111769 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b35e-account-create-update-7wnpn"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.120140 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b35e-account-create-update-7wnpn"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.134290 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pckwh"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.149375 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zbpfl"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.162443 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0242-account-create-update-kpq4x"] Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.563295 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" 
event={"ID":"f9cfc064-c4a3-42cf-8193-9090da67b4db","Type":"ContainerStarted","Data":"c7d5b59cd3fcfc24552df2e8e106d472e9cce17f36f9eba362b4f63be9cd1c9b"} Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.564391 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" event={"ID":"f9cfc064-c4a3-42cf-8193-9090da67b4db","Type":"ContainerStarted","Data":"eee6c9e2eb984181fd3123934e3c5c96f7a68a60cc4223dcc49e820dbe6cda96"} Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.595436 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" podStartSLOduration=2.085274902 podStartE2EDuration="2.595400797s" podCreationTimestamp="2026-01-26 19:10:27 +0000 UTC" firstStartedPulling="2026-01-26 19:10:28.598108747 +0000 UTC m=+1713.163015509" lastFinishedPulling="2026-01-26 19:10:29.108234672 +0000 UTC m=+1713.673141404" observedRunningTime="2026-01-26 19:10:29.585853335 +0000 UTC m=+1714.150760107" watchObservedRunningTime="2026-01-26 19:10:29.595400797 +0000 UTC m=+1714.160307539" Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.789430 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b63ae4-9e1b-4518-b09f-3b5f3893a51e" path="/var/lib/kubelet/pods/93b63ae4-9e1b-4518-b09f-3b5f3893a51e/volumes" Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.790761 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b724a151-32e2-4518-8f64-9d06b50acd55" path="/var/lib/kubelet/pods/b724a151-32e2-4518-8f64-9d06b50acd55/volumes" Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.792802 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76dedbd-e05f-4893-a0b5-9c68a83eb5f4" path="/var/lib/kubelet/pods/d76dedbd-e05f-4893-a0b5-9c68a83eb5f4/volumes" Jan 26 19:10:29 crc kubenswrapper[4770]: I0126 19:10:29.794137 4770 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="e12040f8-22b1-43fe-a86f-6d39c1ac4c8b" path="/var/lib/kubelet/pods/e12040f8-22b1-43fe-a86f-6d39c1ac4c8b/volumes" Jan 26 19:10:30 crc kubenswrapper[4770]: I0126 19:10:30.767691 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:10:30 crc kubenswrapper[4770]: E0126 19:10:30.768332 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:10:34 crc kubenswrapper[4770]: I0126 19:10:34.046482 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-6pgvv"] Jan 26 19:10:34 crc kubenswrapper[4770]: I0126 19:10:34.058006 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-0d4a-account-create-update-42d77"] Jan 26 19:10:34 crc kubenswrapper[4770]: I0126 19:10:34.070654 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-6pgvv"] Jan 26 19:10:34 crc kubenswrapper[4770]: I0126 19:10:34.083490 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-0d4a-account-create-update-42d77"] Jan 26 19:10:35 crc kubenswrapper[4770]: I0126 19:10:35.777677 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e710d1c6-ece5-400d-b061-8ad6cf59c5b6" path="/var/lib/kubelet/pods/e710d1c6-ece5-400d-b061-8ad6cf59c5b6/volumes" Jan 26 19:10:35 crc kubenswrapper[4770]: I0126 19:10:35.778497 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e935c454-cbcc-4b53-a12e-4532e2043189" path="/var/lib/kubelet/pods/e935c454-cbcc-4b53-a12e-4532e2043189/volumes" Jan 26 
19:10:42 crc kubenswrapper[4770]: I0126 19:10:42.767431 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:10:42 crc kubenswrapper[4770]: E0126 19:10:42.769233 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:10:57 crc kubenswrapper[4770]: I0126 19:10:57.767511 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:10:57 crc kubenswrapper[4770]: E0126 19:10:57.768472 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:10:58 crc kubenswrapper[4770]: I0126 19:10:58.054489 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-57gj7"] Jan 26 19:10:58 crc kubenswrapper[4770]: I0126 19:10:58.068434 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-57gj7"] Jan 26 19:10:59 crc kubenswrapper[4770]: I0126 19:10:59.777434 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2" path="/var/lib/kubelet/pods/d3d3d928-a9eb-4751-91e9-7e8fbb47d3d2/volumes" Jan 26 19:11:12 crc kubenswrapper[4770]: I0126 19:11:12.767689 4770 
scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:11:12 crc kubenswrapper[4770]: E0126 19:11:12.768633 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:11:16 crc kubenswrapper[4770]: I0126 19:11:16.058191 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5bnp2"] Jan 26 19:11:16 crc kubenswrapper[4770]: I0126 19:11:16.073752 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mnwcn"] Jan 26 19:11:16 crc kubenswrapper[4770]: I0126 19:11:16.082646 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mnwcn"] Jan 26 19:11:16 crc kubenswrapper[4770]: I0126 19:11:16.091196 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5bnp2"] Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.717861 4770 scope.go:117] "RemoveContainer" containerID="05aec97748948c51648b73e9b3cda44e32f56912b7cb5597778e6de5ca0f1a52" Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.743067 4770 scope.go:117] "RemoveContainer" containerID="8840c5670b533d10530f2c00303dd6291a56339e7c629941ed2c2bf229eebda8" Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.787218 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9f9f5b-1111-4f22-abe2-7146071528f9" path="/var/lib/kubelet/pods/9e9f9f5b-1111-4f22-abe2-7146071528f9/volumes" Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.788281 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d97d19ba-991c-40e1-85cb-fd0402872336" path="/var/lib/kubelet/pods/d97d19ba-991c-40e1-85cb-fd0402872336/volumes" Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.810906 4770 scope.go:117] "RemoveContainer" containerID="0ec454fcba45b29b799b5e5e1b8b87758ecccaf468c7db65bc685faf08638293" Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.867522 4770 scope.go:117] "RemoveContainer" containerID="ad0708d6bbfef49a6d1035fa276733220f5b711e07f89383377b05cb112f3ab2" Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.902565 4770 scope.go:117] "RemoveContainer" containerID="b9f7e457601340fabc71e1711cc35493fbf51a197bf483fb270a69dbdc35aeae" Jan 26 19:11:17 crc kubenswrapper[4770]: I0126 19:11:17.971572 4770 scope.go:117] "RemoveContainer" containerID="1df95aa8bbb4d4cf4cc938ec31226afdccfdaf03e01e29c8a65a2addc3e7b498" Jan 26 19:11:18 crc kubenswrapper[4770]: I0126 19:11:18.013826 4770 scope.go:117] "RemoveContainer" containerID="c1d3dab9457a2832e8c598970388929b33fa495a42d90bf31e7933d0f4ac9939" Jan 26 19:11:19 crc kubenswrapper[4770]: I0126 19:11:19.036716 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1c22-account-create-update-n8c27"] Jan 26 19:11:19 crc kubenswrapper[4770]: I0126 19:11:19.048343 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1c22-account-create-update-n8c27"] Jan 26 19:11:19 crc kubenswrapper[4770]: I0126 19:11:19.057661 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d8e4-account-create-update-lxtmh"] Jan 26 19:11:19 crc kubenswrapper[4770]: I0126 19:11:19.067261 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d8e4-account-create-update-lxtmh"] Jan 26 19:11:19 crc kubenswrapper[4770]: I0126 19:11:19.782294 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c424d7-fc72-42a8-a2f4-206786467a86" path="/var/lib/kubelet/pods/27c424d7-fc72-42a8-a2f4-206786467a86/volumes" Jan 26 19:11:19 crc 
kubenswrapper[4770]: I0126 19:11:19.783332 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4ddc80-a3e0-4ef0-930f-e8778893071b" path="/var/lib/kubelet/pods/9f4ddc80-a3e0-4ef0-930f-e8778893071b/volumes" Jan 26 19:11:24 crc kubenswrapper[4770]: I0126 19:11:24.767064 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:11:24 crc kubenswrapper[4770]: E0126 19:11:24.767831 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.040245 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-25af-account-create-update-vx8h2"] Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.065743 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-l8r5x"] Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.081034 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-68ff-account-create-update-64vn4"] Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.089341 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bxwvd"] Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.096488 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-25af-account-create-update-vx8h2"] Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.103334 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-l8r5x"] Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.110761 4770 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-db-create-bxwvd"] Jan 26 19:11:26 crc kubenswrapper[4770]: I0126 19:11:26.119203 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-68ff-account-create-update-64vn4"] Jan 26 19:11:27 crc kubenswrapper[4770]: I0126 19:11:27.780826 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62c35c81-2111-46fb-b0c8-4e426d1d32f9" path="/var/lib/kubelet/pods/62c35c81-2111-46fb-b0c8-4e426d1d32f9/volumes" Jan 26 19:11:27 crc kubenswrapper[4770]: I0126 19:11:27.781411 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670a14aa-6ae2-42a1-8ab2-c0b13d56cb05" path="/var/lib/kubelet/pods/670a14aa-6ae2-42a1-8ab2-c0b13d56cb05/volumes" Jan 26 19:11:27 crc kubenswrapper[4770]: I0126 19:11:27.781993 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18b03d7-9247-4f08-b476-558e77605786" path="/var/lib/kubelet/pods/b18b03d7-9247-4f08-b476-558e77605786/volumes" Jan 26 19:11:27 crc kubenswrapper[4770]: I0126 19:11:27.782488 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6bf6c96-e816-4d9c-890e-e347005628ec" path="/var/lib/kubelet/pods/e6bf6c96-e816-4d9c-890e-e347005628ec/volumes" Jan 26 19:11:29 crc kubenswrapper[4770]: I0126 19:11:29.032467 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x5wgl"] Jan 26 19:11:29 crc kubenswrapper[4770]: I0126 19:11:29.043074 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x5wgl"] Jan 26 19:11:29 crc kubenswrapper[4770]: I0126 19:11:29.782813 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2193ed97-12f7-437a-a441-222e00b8831d" path="/var/lib/kubelet/pods/2193ed97-12f7-437a-a441-222e00b8831d/volumes" Jan 26 19:11:32 crc kubenswrapper[4770]: I0126 19:11:32.041241 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-bx5vx"] Jan 26 19:11:32 crc 
kubenswrapper[4770]: I0126 19:11:32.052673 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-bx5vx"] Jan 26 19:11:33 crc kubenswrapper[4770]: I0126 19:11:33.781793 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19ec737-f43c-4c4d-b6b0-16b535709eb6" path="/var/lib/kubelet/pods/e19ec737-f43c-4c4d-b6b0-16b535709eb6/volumes" Jan 26 19:11:39 crc kubenswrapper[4770]: I0126 19:11:39.767650 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:11:39 crc kubenswrapper[4770]: E0126 19:11:39.768311 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:11:54 crc kubenswrapper[4770]: I0126 19:11:54.767123 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:11:54 crc kubenswrapper[4770]: E0126 19:11:54.768022 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:12:09 crc kubenswrapper[4770]: I0126 19:12:09.767285 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:12:09 crc kubenswrapper[4770]: E0126 19:12:09.768140 4770 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:12:11 crc kubenswrapper[4770]: I0126 19:12:11.050849 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cd84h"] Jan 26 19:12:11 crc kubenswrapper[4770]: I0126 19:12:11.064348 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cd84h"] Jan 26 19:12:11 crc kubenswrapper[4770]: I0126 19:12:11.778325 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b185c8a-0b51-4433-9e44-2121cb5415ba" path="/var/lib/kubelet/pods/0b185c8a-0b51-4433-9e44-2121cb5415ba/volumes" Jan 26 19:12:12 crc kubenswrapper[4770]: I0126 19:12:12.612495 4770 generic.go:334] "Generic (PLEG): container finished" podID="f9cfc064-c4a3-42cf-8193-9090da67b4db" containerID="c7d5b59cd3fcfc24552df2e8e106d472e9cce17f36f9eba362b4f63be9cd1c9b" exitCode=0 Jan 26 19:12:12 crc kubenswrapper[4770]: I0126 19:12:12.612535 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" event={"ID":"f9cfc064-c4a3-42cf-8193-9090da67b4db","Type":"ContainerDied","Data":"c7d5b59cd3fcfc24552df2e8e106d472e9cce17f36f9eba362b4f63be9cd1c9b"} Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.112027 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.221912 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-inventory\") pod \"f9cfc064-c4a3-42cf-8193-9090da67b4db\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.222205 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-ssh-key-openstack-edpm-ipam\") pod \"f9cfc064-c4a3-42cf-8193-9090da67b4db\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.222290 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l69wl\" (UniqueName: \"kubernetes.io/projected/f9cfc064-c4a3-42cf-8193-9090da67b4db-kube-api-access-l69wl\") pod \"f9cfc064-c4a3-42cf-8193-9090da67b4db\" (UID: \"f9cfc064-c4a3-42cf-8193-9090da67b4db\") " Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.232024 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cfc064-c4a3-42cf-8193-9090da67b4db-kube-api-access-l69wl" (OuterVolumeSpecName: "kube-api-access-l69wl") pod "f9cfc064-c4a3-42cf-8193-9090da67b4db" (UID: "f9cfc064-c4a3-42cf-8193-9090da67b4db"). InnerVolumeSpecName "kube-api-access-l69wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.248971 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-inventory" (OuterVolumeSpecName: "inventory") pod "f9cfc064-c4a3-42cf-8193-9090da67b4db" (UID: "f9cfc064-c4a3-42cf-8193-9090da67b4db"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.260147 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f9cfc064-c4a3-42cf-8193-9090da67b4db" (UID: "f9cfc064-c4a3-42cf-8193-9090da67b4db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.324266 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.324306 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l69wl\" (UniqueName: \"kubernetes.io/projected/f9cfc064-c4a3-42cf-8193-9090da67b4db-kube-api-access-l69wl\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.324320 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cfc064-c4a3-42cf-8193-9090da67b4db-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.634416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" event={"ID":"f9cfc064-c4a3-42cf-8193-9090da67b4db","Type":"ContainerDied","Data":"eee6c9e2eb984181fd3123934e3c5c96f7a68a60cc4223dcc49e820dbe6cda96"} Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.634728 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee6c9e2eb984181fd3123934e3c5c96f7a68a60cc4223dcc49e820dbe6cda96" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 
19:12:14.634511 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.741278 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d"] Jan 26 19:12:14 crc kubenswrapper[4770]: E0126 19:12:14.741799 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cfc064-c4a3-42cf-8193-9090da67b4db" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.741827 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cfc064-c4a3-42cf-8193-9090da67b4db" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.742110 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9cfc064-c4a3-42cf-8193-9090da67b4db" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.743023 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.746962 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.747272 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.747436 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.747668 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.775282 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d"] Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.833956 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wx2h\" (UniqueName: \"kubernetes.io/projected/f64f037e-f80f-4f8d-be06-9917ac988deb-kube-api-access-7wx2h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.834161 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: 
I0126 19:12:14.834271 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.935961 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wx2h\" (UniqueName: \"kubernetes.io/projected/f64f037e-f80f-4f8d-be06-9917ac988deb-kube-api-access-7wx2h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.936065 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.936112 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.940425 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.941382 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:14 crc kubenswrapper[4770]: I0126 19:12:14.952435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wx2h\" (UniqueName: \"kubernetes.io/projected/f64f037e-f80f-4f8d-be06-9917ac988deb-kube-api-access-7wx2h\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:15 crc kubenswrapper[4770]: I0126 19:12:15.063325 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:12:15 crc kubenswrapper[4770]: I0126 19:12:15.627876 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d"] Jan 26 19:12:15 crc kubenswrapper[4770]: I0126 19:12:15.646128 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" event={"ID":"f64f037e-f80f-4f8d-be06-9917ac988deb","Type":"ContainerStarted","Data":"f98bc98e0040dd8759c5fa4e8404e4e894575ec9bb6797c94607107d8a713fa3"} Jan 26 19:12:16 crc kubenswrapper[4770]: I0126 19:12:16.657913 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" event={"ID":"f64f037e-f80f-4f8d-be06-9917ac988deb","Type":"ContainerStarted","Data":"c6a1d0e68a267325bd202626716f5f45d1dda8ff6d671509248c8d2f43f4b41f"} Jan 26 19:12:16 crc kubenswrapper[4770]: I0126 19:12:16.685994 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" podStartSLOduration=2.102022204 podStartE2EDuration="2.685969347s" podCreationTimestamp="2026-01-26 19:12:14 +0000 UTC" firstStartedPulling="2026-01-26 19:12:15.631680601 +0000 UTC m=+1820.196587343" lastFinishedPulling="2026-01-26 19:12:16.215627754 +0000 UTC m=+1820.780534486" observedRunningTime="2026-01-26 19:12:16.674909863 +0000 UTC m=+1821.239824735" watchObservedRunningTime="2026-01-26 19:12:16.685969347 +0000 UTC m=+1821.250876099" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.154552 4770 scope.go:117] "RemoveContainer" containerID="b6f9bcf8b839ee9eb8f0eaacd1a725c981257abebe71bb6520c368768407b98e" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.191031 4770 scope.go:117] "RemoveContainer" 
containerID="1b515a6e4f6ab7b9dad1d7541eb26dbebd67769fc01dea35e360b2137d5b83e5" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.255613 4770 scope.go:117] "RemoveContainer" containerID="082c7207f1ea69779310718fa94a039b463e1c03c684577c0db15b9cb0f1b6cc" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.315958 4770 scope.go:117] "RemoveContainer" containerID="23eaebba4d4226fc37a80bb3c73d87808cfac59bd0ef2aca4607998853b39bc4" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.352842 4770 scope.go:117] "RemoveContainer" containerID="bc25c7e207907afb161546c49b113cee4d85aa5316c0704cad6c2422fbe7c529" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.426209 4770 scope.go:117] "RemoveContainer" containerID="e41a04982fb7f75389f695341d845e09bae5d2322f94855777d1671d61b45686" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.454490 4770 scope.go:117] "RemoveContainer" containerID="180cd83722999e0d55775e13dfb4d83e5b178ea5b1e89829ff123d1ef269f8c3" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.475099 4770 scope.go:117] "RemoveContainer" containerID="352549a81e61f1491bfce3b8bbcf817ece455aafeee464822150a9915376d5a3" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.495391 4770 scope.go:117] "RemoveContainer" containerID="3bb67bb8f1adc12df75be8ad65f1124d23dea1c26ed852be48c3eaa5da788164" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.515750 4770 scope.go:117] "RemoveContainer" containerID="a40087c67e1c636108224e2070760b990493387ddd33e393e5b7ff0dd586c058" Jan 26 19:12:18 crc kubenswrapper[4770]: I0126 19:12:18.552575 4770 scope.go:117] "RemoveContainer" containerID="0e031a1c6fccaa4c836ce83f8de7a029ed38386b5518de4a7bc02052f72b9103" Jan 26 19:12:23 crc kubenswrapper[4770]: I0126 19:12:23.767939 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:12:23 crc kubenswrapper[4770]: E0126 19:12:23.769008 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:12:27 crc kubenswrapper[4770]: I0126 19:12:27.038025 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-sz7sx"] Jan 26 19:12:27 crc kubenswrapper[4770]: I0126 19:12:27.052440 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-sz7sx"] Jan 26 19:12:27 crc kubenswrapper[4770]: I0126 19:12:27.788139 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1913a3-ef04-48f1-9e48-d669c97e66cb" path="/var/lib/kubelet/pods/2a1913a3-ef04-48f1-9e48-d669c97e66cb/volumes" Jan 26 19:12:28 crc kubenswrapper[4770]: I0126 19:12:28.034182 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wjwrr"] Jan 26 19:12:28 crc kubenswrapper[4770]: I0126 19:12:28.041671 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wjwrr"] Jan 26 19:12:29 crc kubenswrapper[4770]: I0126 19:12:29.027251 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-tx8s8"] Jan 26 19:12:29 crc kubenswrapper[4770]: I0126 19:12:29.035487 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-tx8s8"] Jan 26 19:12:29 crc kubenswrapper[4770]: I0126 19:12:29.782605 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380a5f13-cc8e-42b0-92db-e487e61edcb9" path="/var/lib/kubelet/pods/380a5f13-cc8e-42b0-92db-e487e61edcb9/volumes" Jan 26 19:12:29 crc kubenswrapper[4770]: I0126 19:12:29.784190 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd21f2e-d98a-4363-afc3-5707b0ee540d" 
path="/var/lib/kubelet/pods/8cd21f2e-d98a-4363-afc3-5707b0ee540d/volumes" Jan 26 19:12:36 crc kubenswrapper[4770]: I0126 19:12:36.776476 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:12:36 crc kubenswrapper[4770]: E0126 19:12:36.777202 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:12:40 crc kubenswrapper[4770]: I0126 19:12:40.040031 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-f98bs"] Jan 26 19:12:40 crc kubenswrapper[4770]: I0126 19:12:40.051330 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-f98bs"] Jan 26 19:12:41 crc kubenswrapper[4770]: I0126 19:12:41.777270 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200a66de-48c2-4fad-babc-4e45e99790cd" path="/var/lib/kubelet/pods/200a66de-48c2-4fad-babc-4e45e99790cd/volumes" Jan 26 19:12:43 crc kubenswrapper[4770]: I0126 19:12:43.063447 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-q2sdv"] Jan 26 19:12:43 crc kubenswrapper[4770]: I0126 19:12:43.073100 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-q2sdv"] Jan 26 19:12:43 crc kubenswrapper[4770]: I0126 19:12:43.784862 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d149076-49cc-4a5a-80f8-c34dac1c2b45" path="/var/lib/kubelet/pods/9d149076-49cc-4a5a-80f8-c34dac1c2b45/volumes" Jan 26 19:12:47 crc kubenswrapper[4770]: I0126 19:12:47.767497 4770 scope.go:117] "RemoveContainer" 
containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:12:47 crc kubenswrapper[4770]: E0126 19:12:47.768213 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:13:02 crc kubenswrapper[4770]: I0126 19:13:02.768123 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:13:03 crc kubenswrapper[4770]: I0126 19:13:03.155265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"bcd6cbdcbb54366ae41277c5e0ca70660323878aa6ec238cecc096b0604b1641"} Jan 26 19:13:18 crc kubenswrapper[4770]: I0126 19:13:18.756208 4770 scope.go:117] "RemoveContainer" containerID="9c588a6d2154e7bd801452921a77a730a74ec89edbdc20f69b021498ae749d2f" Jan 26 19:13:18 crc kubenswrapper[4770]: I0126 19:13:18.818258 4770 scope.go:117] "RemoveContainer" containerID="725d45f04a1a6f23a0aa0a8e35fac44c1410e163f48c70408f194d0a3641477a" Jan 26 19:13:18 crc kubenswrapper[4770]: I0126 19:13:18.877993 4770 scope.go:117] "RemoveContainer" containerID="2b3d11a27f6e7d1b76edaf917c2ad0fc65b2bdb9bba43ca41fca50e159770ad7" Jan 26 19:13:18 crc kubenswrapper[4770]: I0126 19:13:18.926399 4770 scope.go:117] "RemoveContainer" containerID="96f1180ba7c1a64658df24ca29485dd9bae37d7debfac2c9edd662e7afa48114" Jan 26 19:13:18 crc kubenswrapper[4770]: I0126 19:13:18.976898 4770 scope.go:117] "RemoveContainer" containerID="1ccec2d55f09f36fd639413394264d95c642e5899e056eb76b6565b818f4a0f3" Jan 26 19:13:20 
crc kubenswrapper[4770]: I0126 19:13:20.055890 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7d5e-account-create-update-th69h"] Jan 26 19:13:20 crc kubenswrapper[4770]: I0126 19:13:20.072678 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7d5e-account-create-update-th69h"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.041585 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-016a-account-create-update-spb7k"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.056769 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e5ff-account-create-update-ptwhv"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.085918 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e5ff-account-create-update-ptwhv"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.097300 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-t69pr"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.107029 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-016a-account-create-update-spb7k"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.116454 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7pgdw"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.127536 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7pgdw"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.141538 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-t69pr"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.154144 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-blkxf"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.164647 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-db-create-blkxf"] Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.781056 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269c34bc-d8f7-4c68-bbff-ff5ff812de92" path="/var/lib/kubelet/pods/269c34bc-d8f7-4c68-bbff-ff5ff812de92/volumes" Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.782313 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298782e0-4453-412a-b9c9-08a16d4317d6" path="/var/lib/kubelet/pods/298782e0-4453-412a-b9c9-08a16d4317d6/volumes" Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.783474 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53685f07-5a65-44be-b2e9-1eb713d3ab04" path="/var/lib/kubelet/pods/53685f07-5a65-44be-b2e9-1eb713d3ab04/volumes" Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.784641 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602f406f-cd56-4b2b-8709-8114f7e1d34a" path="/var/lib/kubelet/pods/602f406f-cd56-4b2b-8709-8114f7e1d34a/volumes" Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.786513 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda29038-706d-42e7-9b63-f6c2a3313ff3" path="/var/lib/kubelet/pods/cda29038-706d-42e7-9b63-f6c2a3313ff3/volumes" Jan 26 19:13:21 crc kubenswrapper[4770]: I0126 19:13:21.787125 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad84030-54e1-4ca4-a1a5-1c2bac22679b" path="/var/lib/kubelet/pods/dad84030-54e1-4ca4-a1a5-1c2bac22679b/volumes" Jan 26 19:13:36 crc kubenswrapper[4770]: I0126 19:13:36.819135 4770 generic.go:334] "Generic (PLEG): container finished" podID="f64f037e-f80f-4f8d-be06-9917ac988deb" containerID="c6a1d0e68a267325bd202626716f5f45d1dda8ff6d671509248c8d2f43f4b41f" exitCode=0 Jan 26 19:13:36 crc kubenswrapper[4770]: I0126 19:13:36.819230 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" 
event={"ID":"f64f037e-f80f-4f8d-be06-9917ac988deb","Type":"ContainerDied","Data":"c6a1d0e68a267325bd202626716f5f45d1dda8ff6d671509248c8d2f43f4b41f"} Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.222130 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.410466 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-inventory\") pod \"f64f037e-f80f-4f8d-be06-9917ac988deb\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.410556 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wx2h\" (UniqueName: \"kubernetes.io/projected/f64f037e-f80f-4f8d-be06-9917ac988deb-kube-api-access-7wx2h\") pod \"f64f037e-f80f-4f8d-be06-9917ac988deb\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.410645 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-ssh-key-openstack-edpm-ipam\") pod \"f64f037e-f80f-4f8d-be06-9917ac988deb\" (UID: \"f64f037e-f80f-4f8d-be06-9917ac988deb\") " Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.418511 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64f037e-f80f-4f8d-be06-9917ac988deb-kube-api-access-7wx2h" (OuterVolumeSpecName: "kube-api-access-7wx2h") pod "f64f037e-f80f-4f8d-be06-9917ac988deb" (UID: "f64f037e-f80f-4f8d-be06-9917ac988deb"). InnerVolumeSpecName "kube-api-access-7wx2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.446385 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f64f037e-f80f-4f8d-be06-9917ac988deb" (UID: "f64f037e-f80f-4f8d-be06-9917ac988deb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.473331 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-inventory" (OuterVolumeSpecName: "inventory") pod "f64f037e-f80f-4f8d-be06-9917ac988deb" (UID: "f64f037e-f80f-4f8d-be06-9917ac988deb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.513427 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.513457 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wx2h\" (UniqueName: \"kubernetes.io/projected/f64f037e-f80f-4f8d-be06-9917ac988deb-kube-api-access-7wx2h\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.513466 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f64f037e-f80f-4f8d-be06-9917ac988deb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.846167 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" 
event={"ID":"f64f037e-f80f-4f8d-be06-9917ac988deb","Type":"ContainerDied","Data":"f98bc98e0040dd8759c5fa4e8404e4e894575ec9bb6797c94607107d8a713fa3"} Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.846225 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f98bc98e0040dd8759c5fa4e8404e4e894575ec9bb6797c94607107d8a713fa3" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.846342 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.931396 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6"] Jan 26 19:13:38 crc kubenswrapper[4770]: E0126 19:13:38.932101 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64f037e-f80f-4f8d-be06-9917ac988deb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.932134 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64f037e-f80f-4f8d-be06-9917ac988deb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.932489 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64f037e-f80f-4f8d-be06-9917ac988deb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.933743 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.936505 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.936589 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.936767 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.937166 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:13:38 crc kubenswrapper[4770]: I0126 19:13:38.947103 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6"] Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.124347 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.124440 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk289\" (UniqueName: \"kubernetes.io/projected/608d349d-127c-4f0b-9a56-0368dcd0e46f-kube-api-access-kk289\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 
19:13:39.124797 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.227095 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk289\" (UniqueName: \"kubernetes.io/projected/608d349d-127c-4f0b-9a56-0368dcd0e46f-kube-api-access-kk289\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.227277 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.227534 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.231840 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.232341 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.246730 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk289\" (UniqueName: \"kubernetes.io/projected/608d349d-127c-4f0b-9a56-0368dcd0e46f-kube-api-access-kk289\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-td8t6\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.260883 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:39 crc kubenswrapper[4770]: I0126 19:13:39.868157 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6"] Jan 26 19:13:40 crc kubenswrapper[4770]: I0126 19:13:40.866829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" event={"ID":"608d349d-127c-4f0b-9a56-0368dcd0e46f","Type":"ContainerStarted","Data":"c8ef0496875ce71d8da132053ea77ac9d6e5e8146b7e510baac46705bd24e508"} Jan 26 19:13:40 crc kubenswrapper[4770]: I0126 19:13:40.867468 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" event={"ID":"608d349d-127c-4f0b-9a56-0368dcd0e46f","Type":"ContainerStarted","Data":"7255eaf99c54d7a9c1f1533f3b61d23eb0a4249d0c625ef848b8a6ea98742b32"} Jan 26 19:13:40 crc kubenswrapper[4770]: I0126 19:13:40.888156 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" podStartSLOduration=2.22933506 podStartE2EDuration="2.888132606s" podCreationTimestamp="2026-01-26 19:13:38 +0000 UTC" firstStartedPulling="2026-01-26 19:13:39.870629509 +0000 UTC m=+1904.435536251" lastFinishedPulling="2026-01-26 19:13:40.529427055 +0000 UTC m=+1905.094333797" observedRunningTime="2026-01-26 19:13:40.881960377 +0000 UTC m=+1905.446867119" watchObservedRunningTime="2026-01-26 19:13:40.888132606 +0000 UTC m=+1905.453039338" Jan 26 19:13:45 crc kubenswrapper[4770]: I0126 19:13:45.922959 4770 generic.go:334] "Generic (PLEG): container finished" podID="608d349d-127c-4f0b-9a56-0368dcd0e46f" containerID="c8ef0496875ce71d8da132053ea77ac9d6e5e8146b7e510baac46705bd24e508" exitCode=0 Jan 26 19:13:45 crc kubenswrapper[4770]: I0126 19:13:45.923068 4770 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" event={"ID":"608d349d-127c-4f0b-9a56-0368dcd0e46f","Type":"ContainerDied","Data":"c8ef0496875ce71d8da132053ea77ac9d6e5e8146b7e510baac46705bd24e508"} Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.493333 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.611242 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-inventory\") pod \"608d349d-127c-4f0b-9a56-0368dcd0e46f\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.611374 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-ssh-key-openstack-edpm-ipam\") pod \"608d349d-127c-4f0b-9a56-0368dcd0e46f\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.611485 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk289\" (UniqueName: \"kubernetes.io/projected/608d349d-127c-4f0b-9a56-0368dcd0e46f-kube-api-access-kk289\") pod \"608d349d-127c-4f0b-9a56-0368dcd0e46f\" (UID: \"608d349d-127c-4f0b-9a56-0368dcd0e46f\") " Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.619062 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608d349d-127c-4f0b-9a56-0368dcd0e46f-kube-api-access-kk289" (OuterVolumeSpecName: "kube-api-access-kk289") pod "608d349d-127c-4f0b-9a56-0368dcd0e46f" (UID: "608d349d-127c-4f0b-9a56-0368dcd0e46f"). InnerVolumeSpecName "kube-api-access-kk289". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.643790 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "608d349d-127c-4f0b-9a56-0368dcd0e46f" (UID: "608d349d-127c-4f0b-9a56-0368dcd0e46f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.647214 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-inventory" (OuterVolumeSpecName: "inventory") pod "608d349d-127c-4f0b-9a56-0368dcd0e46f" (UID: "608d349d-127c-4f0b-9a56-0368dcd0e46f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.714090 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk289\" (UniqueName: \"kubernetes.io/projected/608d349d-127c-4f0b-9a56-0368dcd0e46f-kube-api-access-kk289\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.714126 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.714136 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/608d349d-127c-4f0b-9a56-0368dcd0e46f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.941188 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" 
event={"ID":"608d349d-127c-4f0b-9a56-0368dcd0e46f","Type":"ContainerDied","Data":"7255eaf99c54d7a9c1f1533f3b61d23eb0a4249d0c625ef848b8a6ea98742b32"} Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.941228 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7255eaf99c54d7a9c1f1533f3b61d23eb0a4249d0c625ef848b8a6ea98742b32" Jan 26 19:13:47 crc kubenswrapper[4770]: I0126 19:13:47.941282 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-td8t6" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.035241 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk"] Jan 26 19:13:48 crc kubenswrapper[4770]: E0126 19:13:48.035873 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608d349d-127c-4f0b-9a56-0368dcd0e46f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.035899 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="608d349d-127c-4f0b-9a56-0368dcd0e46f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.036196 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="608d349d-127c-4f0b-9a56-0368dcd0e46f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.037095 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.039006 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.039859 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.040608 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.041599 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.047329 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk"] Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.123869 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.124283 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.124848 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5cjt\" (UniqueName: \"kubernetes.io/projected/51afe695-3612-4c67-8f8f-d7cf1c927b20-kube-api-access-j5cjt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.227052 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.227199 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.227339 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5cjt\" (UniqueName: \"kubernetes.io/projected/51afe695-3612-4c67-8f8f-d7cf1c927b20-kube-api-access-j5cjt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.233381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.234400 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.252824 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5cjt\" (UniqueName: \"kubernetes.io/projected/51afe695-3612-4c67-8f8f-d7cf1c927b20-kube-api-access-j5cjt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zd9mk\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.365756 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.934138 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk"] Jan 26 19:13:48 crc kubenswrapper[4770]: I0126 19:13:48.954212 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" event={"ID":"51afe695-3612-4c67-8f8f-d7cf1c927b20","Type":"ContainerStarted","Data":"e105acf4ed3d605b23661ef8cb9f727d74d239dd0447c63af25e8e4484113b17"} Jan 26 19:13:49 crc kubenswrapper[4770]: I0126 19:13:49.966102 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" event={"ID":"51afe695-3612-4c67-8f8f-d7cf1c927b20","Type":"ContainerStarted","Data":"778e035756b1f8faa500eedb0c93a63c3c651eea766061f38e0400edb40f6c4e"} Jan 26 19:13:49 crc kubenswrapper[4770]: I0126 19:13:49.985465 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" podStartSLOduration=1.561149262 podStartE2EDuration="1.985443261s" podCreationTimestamp="2026-01-26 19:13:48 +0000 UTC" firstStartedPulling="2026-01-26 19:13:48.930034284 +0000 UTC m=+1913.494941026" lastFinishedPulling="2026-01-26 19:13:49.354328253 +0000 UTC m=+1913.919235025" observedRunningTime="2026-01-26 19:13:49.983472057 +0000 UTC m=+1914.548378799" watchObservedRunningTime="2026-01-26 19:13:49.985443261 +0000 UTC m=+1914.550350003" Jan 26 19:13:53 crc kubenswrapper[4770]: I0126 19:13:53.048886 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2gtrl"] Jan 26 19:13:53 crc kubenswrapper[4770]: I0126 19:13:53.057811 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2gtrl"] Jan 26 19:13:53 crc kubenswrapper[4770]: I0126 
19:13:53.778308 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c85e81-cbf1-4b3e-9012-f8f10e74021e" path="/var/lib/kubelet/pods/c6c85e81-cbf1-4b3e-9012-f8f10e74021e/volumes" Jan 26 19:14:19 crc kubenswrapper[4770]: I0126 19:14:19.146820 4770 scope.go:117] "RemoveContainer" containerID="0b7b6ee028f0271cf8cfaa70cc68a0adeef7ec1936021f014b9dac8044ec552d" Jan 26 19:14:19 crc kubenswrapper[4770]: I0126 19:14:19.178474 4770 scope.go:117] "RemoveContainer" containerID="2b9314a446cdd3e7b8d85b59cf7dd678d25ace9f96b03abd5300bf3073df7ae9" Jan 26 19:14:19 crc kubenswrapper[4770]: I0126 19:14:19.238965 4770 scope.go:117] "RemoveContainer" containerID="7b22c4f20b0d5eac0d1b6f2237e05c7984161037fc68a653ac02a964e7836828" Jan 26 19:14:19 crc kubenswrapper[4770]: I0126 19:14:19.291924 4770 scope.go:117] "RemoveContainer" containerID="8c1d41495873e40bf971a45bdd81011b56d5b0bd8239fcac41c6ebfd740533e3" Jan 26 19:14:19 crc kubenswrapper[4770]: I0126 19:14:19.357182 4770 scope.go:117] "RemoveContainer" containerID="01e777708fcfc3c4293f9b2c3e1c9617e79f40232f9110f5bef098ade1ff8fff" Jan 26 19:14:19 crc kubenswrapper[4770]: I0126 19:14:19.382711 4770 scope.go:117] "RemoveContainer" containerID="17bfda71737f913902517af58328ac4019e1ef77155eb6c3e698bcc8238818ea" Jan 26 19:14:19 crc kubenswrapper[4770]: I0126 19:14:19.431903 4770 scope.go:117] "RemoveContainer" containerID="5959dce8bdc67c24e85004c3dc5fdf371effa985fca55e454c3963b96319c268" Jan 26 19:14:22 crc kubenswrapper[4770]: I0126 19:14:22.057690 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rcvbt"] Jan 26 19:14:22 crc kubenswrapper[4770]: I0126 19:14:22.073299 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rcvbt"] Jan 26 19:14:23 crc kubenswrapper[4770]: I0126 19:14:23.780452 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf8c60c-ba0f-4c3e-8df1-8323360857b5" 
path="/var/lib/kubelet/pods/0cf8c60c-ba0f-4c3e-8df1-8323360857b5/volumes" Jan 26 19:14:26 crc kubenswrapper[4770]: I0126 19:14:26.038667 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-69wnj"] Jan 26 19:14:26 crc kubenswrapper[4770]: I0126 19:14:26.046973 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-69wnj"] Jan 26 19:14:27 crc kubenswrapper[4770]: I0126 19:14:27.790153 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07378f5-6d68-438a-8bd0-01b033da7b25" path="/var/lib/kubelet/pods/d07378f5-6d68-438a-8bd0-01b033da7b25/volumes" Jan 26 19:14:33 crc kubenswrapper[4770]: I0126 19:14:33.390127 4770 generic.go:334] "Generic (PLEG): container finished" podID="51afe695-3612-4c67-8f8f-d7cf1c927b20" containerID="778e035756b1f8faa500eedb0c93a63c3c651eea766061f38e0400edb40f6c4e" exitCode=0 Jan 26 19:14:33 crc kubenswrapper[4770]: I0126 19:14:33.390283 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" event={"ID":"51afe695-3612-4c67-8f8f-d7cf1c927b20","Type":"ContainerDied","Data":"778e035756b1f8faa500eedb0c93a63c3c651eea766061f38e0400edb40f6c4e"} Jan 26 19:14:34 crc kubenswrapper[4770]: I0126 19:14:34.951632 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.082407 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5cjt\" (UniqueName: \"kubernetes.io/projected/51afe695-3612-4c67-8f8f-d7cf1c927b20-kube-api-access-j5cjt\") pod \"51afe695-3612-4c67-8f8f-d7cf1c927b20\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.082503 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-ssh-key-openstack-edpm-ipam\") pod \"51afe695-3612-4c67-8f8f-d7cf1c927b20\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.082818 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-inventory\") pod \"51afe695-3612-4c67-8f8f-d7cf1c927b20\" (UID: \"51afe695-3612-4c67-8f8f-d7cf1c927b20\") " Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.090005 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51afe695-3612-4c67-8f8f-d7cf1c927b20-kube-api-access-j5cjt" (OuterVolumeSpecName: "kube-api-access-j5cjt") pod "51afe695-3612-4c67-8f8f-d7cf1c927b20" (UID: "51afe695-3612-4c67-8f8f-d7cf1c927b20"). InnerVolumeSpecName "kube-api-access-j5cjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.167903 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51afe695-3612-4c67-8f8f-d7cf1c927b20" (UID: "51afe695-3612-4c67-8f8f-d7cf1c927b20"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.169296 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-inventory" (OuterVolumeSpecName: "inventory") pod "51afe695-3612-4c67-8f8f-d7cf1c927b20" (UID: "51afe695-3612-4c67-8f8f-d7cf1c927b20"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.186054 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5cjt\" (UniqueName: \"kubernetes.io/projected/51afe695-3612-4c67-8f8f-d7cf1c927b20-kube-api-access-j5cjt\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.186085 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.186098 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51afe695-3612-4c67-8f8f-d7cf1c927b20-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.411655 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" 
event={"ID":"51afe695-3612-4c67-8f8f-d7cf1c927b20","Type":"ContainerDied","Data":"e105acf4ed3d605b23661ef8cb9f727d74d239dd0447c63af25e8e4484113b17"} Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.411708 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zd9mk" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.411738 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e105acf4ed3d605b23661ef8cb9f727d74d239dd0447c63af25e8e4484113b17" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.523076 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl"] Jan 26 19:14:35 crc kubenswrapper[4770]: E0126 19:14:35.523658 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51afe695-3612-4c67-8f8f-d7cf1c927b20" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.523682 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="51afe695-3612-4c67-8f8f-d7cf1c927b20" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.524049 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="51afe695-3612-4c67-8f8f-d7cf1c927b20" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.524978 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.527802 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.528127 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.528132 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.528299 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.549777 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl"] Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.598437 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.598758 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.598976 
4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvhw\" (UniqueName: \"kubernetes.io/projected/f2cab92c-6548-4bab-82d8-f9cc534b88a8-kube-api-access-2mvhw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.702150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.702412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.702556 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mvhw\" (UniqueName: \"kubernetes.io/projected/f2cab92c-6548-4bab-82d8-f9cc534b88a8-kube-api-access-2mvhw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.708250 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.708889 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.721479 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mvhw\" (UniqueName: \"kubernetes.io/projected/f2cab92c-6548-4bab-82d8-f9cc534b88a8-kube-api-access-2mvhw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:35 crc kubenswrapper[4770]: I0126 19:14:35.856177 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:14:36 crc kubenswrapper[4770]: I0126 19:14:36.454959 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl"] Jan 26 19:14:37 crc kubenswrapper[4770]: I0126 19:14:37.449076 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" event={"ID":"f2cab92c-6548-4bab-82d8-f9cc534b88a8","Type":"ContainerStarted","Data":"85ba63398c08d0a7b90ef76363be7170647b26e3f1153d98ac8bbd35fbf68aa8"} Jan 26 19:14:37 crc kubenswrapper[4770]: I0126 19:14:37.449467 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" event={"ID":"f2cab92c-6548-4bab-82d8-f9cc534b88a8","Type":"ContainerStarted","Data":"73079bf95efc13b98c553eb0fa2fc3a2cfcd0d58b979c00da945c4fdec528ede"} Jan 26 19:14:37 crc kubenswrapper[4770]: I0126 19:14:37.471958 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" podStartSLOduration=2.048121585 podStartE2EDuration="2.471934871s" podCreationTimestamp="2026-01-26 19:14:35 +0000 UTC" firstStartedPulling="2026-01-26 19:14:36.465175798 +0000 UTC m=+1961.030082530" lastFinishedPulling="2026-01-26 19:14:36.888989074 +0000 UTC m=+1961.453895816" observedRunningTime="2026-01-26 19:14:37.467366916 +0000 UTC m=+1962.032273658" watchObservedRunningTime="2026-01-26 19:14:37.471934871 +0000 UTC m=+1962.036841593" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.139665 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd"] Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.141917 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.144217 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.144292 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.159568 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd"] Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.277190 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c65537-9b89-449e-8f1f-8036841225f2-secret-volume\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.277610 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c65537-9b89-449e-8f1f-8036841225f2-config-volume\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.277776 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whbs\" (UniqueName: \"kubernetes.io/projected/77c65537-9b89-449e-8f1f-8036841225f2-kube-api-access-4whbs\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.379777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c65537-9b89-449e-8f1f-8036841225f2-secret-volume\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.379889 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c65537-9b89-449e-8f1f-8036841225f2-config-volume\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.379910 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whbs\" (UniqueName: \"kubernetes.io/projected/77c65537-9b89-449e-8f1f-8036841225f2-kube-api-access-4whbs\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.381176 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c65537-9b89-449e-8f1f-8036841225f2-config-volume\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.387162 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/77c65537-9b89-449e-8f1f-8036841225f2-secret-volume\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.395642 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whbs\" (UniqueName: \"kubernetes.io/projected/77c65537-9b89-449e-8f1f-8036841225f2-kube-api-access-4whbs\") pod \"collect-profiles-29490915-rlvtd\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.526913 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:00 crc kubenswrapper[4770]: I0126 19:15:00.994161 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd"] Jan 26 19:15:00 crc kubenswrapper[4770]: W0126 19:15:00.998669 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77c65537_9b89_449e_8f1f_8036841225f2.slice/crio-835744c19d9a5b85dc38ed006f0ea53c000490944c3beda6d6b19a8ba965ced8 WatchSource:0}: Error finding container 835744c19d9a5b85dc38ed006f0ea53c000490944c3beda6d6b19a8ba965ced8: Status 404 returned error can't find the container with id 835744c19d9a5b85dc38ed006f0ea53c000490944c3beda6d6b19a8ba965ced8 Jan 26 19:15:01 crc kubenswrapper[4770]: I0126 19:15:01.708690 4770 generic.go:334] "Generic (PLEG): container finished" podID="77c65537-9b89-449e-8f1f-8036841225f2" containerID="ff8fea5932d0f4cbd70dc9f75ce204653a4482359c3e91b21c3ec99cd4968449" exitCode=0 Jan 26 19:15:01 crc kubenswrapper[4770]: I0126 19:15:01.708815 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" event={"ID":"77c65537-9b89-449e-8f1f-8036841225f2","Type":"ContainerDied","Data":"ff8fea5932d0f4cbd70dc9f75ce204653a4482359c3e91b21c3ec99cd4968449"} Jan 26 19:15:01 crc kubenswrapper[4770]: I0126 19:15:01.709030 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" event={"ID":"77c65537-9b89-449e-8f1f-8036841225f2","Type":"ContainerStarted","Data":"835744c19d9a5b85dc38ed006f0ea53c000490944c3beda6d6b19a8ba965ced8"} Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.091811 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.137509 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4whbs\" (UniqueName: \"kubernetes.io/projected/77c65537-9b89-449e-8f1f-8036841225f2-kube-api-access-4whbs\") pod \"77c65537-9b89-449e-8f1f-8036841225f2\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.137761 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c65537-9b89-449e-8f1f-8036841225f2-config-volume\") pod \"77c65537-9b89-449e-8f1f-8036841225f2\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.137853 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c65537-9b89-449e-8f1f-8036841225f2-secret-volume\") pod \"77c65537-9b89-449e-8f1f-8036841225f2\" (UID: \"77c65537-9b89-449e-8f1f-8036841225f2\") " Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.138559 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/77c65537-9b89-449e-8f1f-8036841225f2-config-volume" (OuterVolumeSpecName: "config-volume") pod "77c65537-9b89-449e-8f1f-8036841225f2" (UID: "77c65537-9b89-449e-8f1f-8036841225f2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.143774 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c65537-9b89-449e-8f1f-8036841225f2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77c65537-9b89-449e-8f1f-8036841225f2" (UID: "77c65537-9b89-449e-8f1f-8036841225f2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.144864 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c65537-9b89-449e-8f1f-8036841225f2-kube-api-access-4whbs" (OuterVolumeSpecName: "kube-api-access-4whbs") pod "77c65537-9b89-449e-8f1f-8036841225f2" (UID: "77c65537-9b89-449e-8f1f-8036841225f2"). InnerVolumeSpecName "kube-api-access-4whbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.241092 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c65537-9b89-449e-8f1f-8036841225f2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.241453 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77c65537-9b89-449e-8f1f-8036841225f2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.241583 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4whbs\" (UniqueName: \"kubernetes.io/projected/77c65537-9b89-449e-8f1f-8036841225f2-kube-api-access-4whbs\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.732183 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" event={"ID":"77c65537-9b89-449e-8f1f-8036841225f2","Type":"ContainerDied","Data":"835744c19d9a5b85dc38ed006f0ea53c000490944c3beda6d6b19a8ba965ced8"} Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.732549 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835744c19d9a5b85dc38ed006f0ea53c000490944c3beda6d6b19a8ba965ced8" Jan 26 19:15:03 crc kubenswrapper[4770]: I0126 19:15:03.732273 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd" Jan 26 19:15:04 crc kubenswrapper[4770]: I0126 19:15:04.170380 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"] Jan 26 19:15:04 crc kubenswrapper[4770]: I0126 19:15:04.178254 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490870-vl9jv"] Jan 26 19:15:05 crc kubenswrapper[4770]: I0126 19:15:05.792160 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99112e4-bf15-412c-89dd-a68b4bd43dd5" path="/var/lib/kubelet/pods/c99112e4-bf15-412c-89dd-a68b4bd43dd5/volumes" Jan 26 19:15:06 crc kubenswrapper[4770]: I0126 19:15:06.059391 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-n6fsv"] Jan 26 19:15:06 crc kubenswrapper[4770]: I0126 19:15:06.069615 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-n6fsv"] Jan 26 19:15:07 crc kubenswrapper[4770]: I0126 19:15:07.792293 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79133e5-1315-4728-bbb2-7ad2912ed30b" path="/var/lib/kubelet/pods/f79133e5-1315-4728-bbb2-7ad2912ed30b/volumes" Jan 26 19:15:19 crc kubenswrapper[4770]: I0126 19:15:19.564533 4770 scope.go:117] "RemoveContainer" containerID="2cbf6224b896fc14ff17249793441fef1e367b87b9bef13ac30656df6f8de035" Jan 26 19:15:19 crc kubenswrapper[4770]: I0126 19:15:19.627678 4770 scope.go:117] "RemoveContainer" containerID="053c2fdfbeef62642b578d4e70b6f4f9d45ab589ce2dc90fbb864da580ff79ce" Jan 26 19:15:19 crc kubenswrapper[4770]: I0126 19:15:19.684371 4770 scope.go:117] "RemoveContainer" containerID="b0722d76b8af30b4179059dd64b413d06163f7e4f5e20eedde53dce53362e5a0" Jan 26 19:15:19 crc kubenswrapper[4770]: I0126 19:15:19.752436 4770 scope.go:117] "RemoveContainer" 
containerID="979e8eedb42cf5c4b771d1bb67c11ee682fba3e594bc07d121a304427ee27269" Jan 26 19:15:30 crc kubenswrapper[4770]: I0126 19:15:30.330533 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:15:30 crc kubenswrapper[4770]: I0126 19:15:30.331375 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:15:35 crc kubenswrapper[4770]: I0126 19:15:35.090509 4770 generic.go:334] "Generic (PLEG): container finished" podID="f2cab92c-6548-4bab-82d8-f9cc534b88a8" containerID="85ba63398c08d0a7b90ef76363be7170647b26e3f1153d98ac8bbd35fbf68aa8" exitCode=0 Jan 26 19:15:35 crc kubenswrapper[4770]: I0126 19:15:35.090643 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" event={"ID":"f2cab92c-6548-4bab-82d8-f9cc534b88a8","Type":"ContainerDied","Data":"85ba63398c08d0a7b90ef76363be7170647b26e3f1153d98ac8bbd35fbf68aa8"} Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.543281 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.635869 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-ssh-key-openstack-edpm-ipam\") pod \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.636255 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mvhw\" (UniqueName: \"kubernetes.io/projected/f2cab92c-6548-4bab-82d8-f9cc534b88a8-kube-api-access-2mvhw\") pod \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.636421 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-inventory\") pod \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\" (UID: \"f2cab92c-6548-4bab-82d8-f9cc534b88a8\") " Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.642073 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2cab92c-6548-4bab-82d8-f9cc534b88a8-kube-api-access-2mvhw" (OuterVolumeSpecName: "kube-api-access-2mvhw") pod "f2cab92c-6548-4bab-82d8-f9cc534b88a8" (UID: "f2cab92c-6548-4bab-82d8-f9cc534b88a8"). InnerVolumeSpecName "kube-api-access-2mvhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.664637 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f2cab92c-6548-4bab-82d8-f9cc534b88a8" (UID: "f2cab92c-6548-4bab-82d8-f9cc534b88a8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.670674 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-inventory" (OuterVolumeSpecName: "inventory") pod "f2cab92c-6548-4bab-82d8-f9cc534b88a8" (UID: "f2cab92c-6548-4bab-82d8-f9cc534b88a8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.742655 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.742689 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2cab92c-6548-4bab-82d8-f9cc534b88a8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:36 crc kubenswrapper[4770]: I0126 19:15:36.742721 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mvhw\" (UniqueName: \"kubernetes.io/projected/f2cab92c-6548-4bab-82d8-f9cc534b88a8-kube-api-access-2mvhw\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.112518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" 
event={"ID":"f2cab92c-6548-4bab-82d8-f9cc534b88a8","Type":"ContainerDied","Data":"73079bf95efc13b98c553eb0fa2fc3a2cfcd0d58b979c00da945c4fdec528ede"} Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.112559 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73079bf95efc13b98c553eb0fa2fc3a2cfcd0d58b979c00da945c4fdec528ede" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.112612 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.265043 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vm5mv"] Jan 26 19:15:37 crc kubenswrapper[4770]: E0126 19:15:37.265460 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c65537-9b89-449e-8f1f-8036841225f2" containerName="collect-profiles" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.265479 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c65537-9b89-449e-8f1f-8036841225f2" containerName="collect-profiles" Jan 26 19:15:37 crc kubenswrapper[4770]: E0126 19:15:37.265507 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cab92c-6548-4bab-82d8-f9cc534b88a8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.265516 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cab92c-6548-4bab-82d8-f9cc534b88a8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.265767 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c65537-9b89-449e-8f1f-8036841225f2" containerName="collect-profiles" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.265791 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2cab92c-6548-4bab-82d8-f9cc534b88a8" 
containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.269871 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.278459 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.278751 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.279000 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.279249 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.289314 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vm5mv"] Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.458244 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.458496 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.458865 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km4m\" (UniqueName: \"kubernetes.io/projected/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-kube-api-access-8km4m\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.561539 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.562039 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km4m\" (UniqueName: \"kubernetes.io/projected/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-kube-api-access-8km4m\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.562161 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.568988 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-inventory-0\") pod 
\"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.575157 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.579647 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km4m\" (UniqueName: \"kubernetes.io/projected/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-kube-api-access-8km4m\") pod \"ssh-known-hosts-edpm-deployment-vm5mv\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:37 crc kubenswrapper[4770]: I0126 19:15:37.609736 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:38 crc kubenswrapper[4770]: I0126 19:15:38.157965 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vm5mv"] Jan 26 19:15:38 crc kubenswrapper[4770]: W0126 19:15:38.160853 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f5964e6_f0a0_459a_a754_dcefc5a6ee69.slice/crio-da881dd9ef9e3ef9fd22a7577ee1338eb668617d4cd53845b58f71c1eadea391 WatchSource:0}: Error finding container da881dd9ef9e3ef9fd22a7577ee1338eb668617d4cd53845b58f71c1eadea391: Status 404 returned error can't find the container with id da881dd9ef9e3ef9fd22a7577ee1338eb668617d4cd53845b58f71c1eadea391 Jan 26 19:15:38 crc kubenswrapper[4770]: I0126 19:15:38.163331 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:15:39 crc kubenswrapper[4770]: I0126 19:15:39.132218 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" event={"ID":"5f5964e6-f0a0-459a-a754-dcefc5a6ee69","Type":"ContainerStarted","Data":"6ceebeb5b666bbe47a5f9a1d7f1924a3e0e32df91a80277a93b9cdc113d5f7ac"} Jan 26 19:15:39 crc kubenswrapper[4770]: I0126 19:15:39.132592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" event={"ID":"5f5964e6-f0a0-459a-a754-dcefc5a6ee69","Type":"ContainerStarted","Data":"da881dd9ef9e3ef9fd22a7577ee1338eb668617d4cd53845b58f71c1eadea391"} Jan 26 19:15:39 crc kubenswrapper[4770]: I0126 19:15:39.153991 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" podStartSLOduration=1.587040475 podStartE2EDuration="2.153972873s" podCreationTimestamp="2026-01-26 19:15:37 +0000 UTC" firstStartedPulling="2026-01-26 19:15:38.163128876 +0000 UTC m=+2022.728035608" 
lastFinishedPulling="2026-01-26 19:15:38.730061274 +0000 UTC m=+2023.294968006" observedRunningTime="2026-01-26 19:15:39.146237621 +0000 UTC m=+2023.711144353" watchObservedRunningTime="2026-01-26 19:15:39.153972873 +0000 UTC m=+2023.718879605" Jan 26 19:15:47 crc kubenswrapper[4770]: I0126 19:15:47.220271 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f5964e6-f0a0-459a-a754-dcefc5a6ee69" containerID="6ceebeb5b666bbe47a5f9a1d7f1924a3e0e32df91a80277a93b9cdc113d5f7ac" exitCode=0 Jan 26 19:15:47 crc kubenswrapper[4770]: I0126 19:15:47.220477 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" event={"ID":"5f5964e6-f0a0-459a-a754-dcefc5a6ee69","Type":"ContainerDied","Data":"6ceebeb5b666bbe47a5f9a1d7f1924a3e0e32df91a80277a93b9cdc113d5f7ac"} Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.732288 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.890170 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8km4m\" (UniqueName: \"kubernetes.io/projected/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-kube-api-access-8km4m\") pod \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.890626 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-inventory-0\") pod \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.890775 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-ssh-key-openstack-edpm-ipam\") pod \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\" (UID: \"5f5964e6-f0a0-459a-a754-dcefc5a6ee69\") " Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.896029 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-kube-api-access-8km4m" (OuterVolumeSpecName: "kube-api-access-8km4m") pod "5f5964e6-f0a0-459a-a754-dcefc5a6ee69" (UID: "5f5964e6-f0a0-459a-a754-dcefc5a6ee69"). InnerVolumeSpecName "kube-api-access-8km4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.929345 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "5f5964e6-f0a0-459a-a754-dcefc5a6ee69" (UID: "5f5964e6-f0a0-459a-a754-dcefc5a6ee69"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.941638 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5f5964e6-f0a0-459a-a754-dcefc5a6ee69" (UID: "5f5964e6-f0a0-459a-a754-dcefc5a6ee69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.994194 4770 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.994276 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:48 crc kubenswrapper[4770]: I0126 19:15:48.994310 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8km4m\" (UniqueName: \"kubernetes.io/projected/5f5964e6-f0a0-459a-a754-dcefc5a6ee69-kube-api-access-8km4m\") on node \"crc\" DevicePath \"\"" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.246955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" event={"ID":"5f5964e6-f0a0-459a-a754-dcefc5a6ee69","Type":"ContainerDied","Data":"da881dd9ef9e3ef9fd22a7577ee1338eb668617d4cd53845b58f71c1eadea391"} Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.247020 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da881dd9ef9e3ef9fd22a7577ee1338eb668617d4cd53845b58f71c1eadea391" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.247101 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vm5mv" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.343179 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s"] Jan 26 19:15:49 crc kubenswrapper[4770]: E0126 19:15:49.343914 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5964e6-f0a0-459a-a754-dcefc5a6ee69" containerName="ssh-known-hosts-edpm-deployment" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.344007 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5964e6-f0a0-459a-a754-dcefc5a6ee69" containerName="ssh-known-hosts-edpm-deployment" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.344315 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5964e6-f0a0-459a-a754-dcefc5a6ee69" containerName="ssh-known-hosts-edpm-deployment" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.345067 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.353183 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.353573 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.353836 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.353993 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.367544 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s"] Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.507150 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.507326 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.507727 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwf9\" (UniqueName: \"kubernetes.io/projected/01d59985-d42f-42a7-9af0-01420a06b702-kube-api-access-zpwf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.609392 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpwf9\" (UniqueName: \"kubernetes.io/projected/01d59985-d42f-42a7-9af0-01420a06b702-kube-api-access-zpwf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.609484 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.609518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.614542 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: 
\"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.616607 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.636145 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpwf9\" (UniqueName: \"kubernetes.io/projected/01d59985-d42f-42a7-9af0-01420a06b702-kube-api-access-zpwf9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6962s\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:49 crc kubenswrapper[4770]: I0126 19:15:49.668905 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:15:50 crc kubenswrapper[4770]: I0126 19:15:50.251885 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s"] Jan 26 19:15:50 crc kubenswrapper[4770]: I0126 19:15:50.264248 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" event={"ID":"01d59985-d42f-42a7-9af0-01420a06b702","Type":"ContainerStarted","Data":"204067ab9a99762edce8885e28660f246e45ab01292544f54954c447c0c41605"} Jan 26 19:15:51 crc kubenswrapper[4770]: I0126 19:15:51.279317 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" event={"ID":"01d59985-d42f-42a7-9af0-01420a06b702","Type":"ContainerStarted","Data":"541fdd815aec89b64e9187ed90859d77ee7a0b450228839959ef0ee580557a11"} Jan 26 19:15:51 crc kubenswrapper[4770]: I0126 19:15:51.302779 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" podStartSLOduration=1.823276584 podStartE2EDuration="2.302751145s" podCreationTimestamp="2026-01-26 19:15:49 +0000 UTC" firstStartedPulling="2026-01-26 19:15:50.253581519 +0000 UTC m=+2034.818488261" lastFinishedPulling="2026-01-26 19:15:50.73305608 +0000 UTC m=+2035.297962822" observedRunningTime="2026-01-26 19:15:51.295994559 +0000 UTC m=+2035.860901301" watchObservedRunningTime="2026-01-26 19:15:51.302751145 +0000 UTC m=+2035.867657917" Jan 26 19:16:00 crc kubenswrapper[4770]: I0126 19:16:00.330970 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:16:00 crc kubenswrapper[4770]: I0126 
19:16:00.331720 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:16:00 crc kubenswrapper[4770]: I0126 19:16:00.396532 4770 generic.go:334] "Generic (PLEG): container finished" podID="01d59985-d42f-42a7-9af0-01420a06b702" containerID="541fdd815aec89b64e9187ed90859d77ee7a0b450228839959ef0ee580557a11" exitCode=0 Jan 26 19:16:00 crc kubenswrapper[4770]: I0126 19:16:00.396589 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" event={"ID":"01d59985-d42f-42a7-9af0-01420a06b702","Type":"ContainerDied","Data":"541fdd815aec89b64e9187ed90859d77ee7a0b450228839959ef0ee580557a11"} Jan 26 19:16:01 crc kubenswrapper[4770]: I0126 19:16:01.959038 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.093492 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-inventory\") pod \"01d59985-d42f-42a7-9af0-01420a06b702\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.093557 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpwf9\" (UniqueName: \"kubernetes.io/projected/01d59985-d42f-42a7-9af0-01420a06b702-kube-api-access-zpwf9\") pod \"01d59985-d42f-42a7-9af0-01420a06b702\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.093810 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-ssh-key-openstack-edpm-ipam\") pod \"01d59985-d42f-42a7-9af0-01420a06b702\" (UID: \"01d59985-d42f-42a7-9af0-01420a06b702\") " Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.102467 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d59985-d42f-42a7-9af0-01420a06b702-kube-api-access-zpwf9" (OuterVolumeSpecName: "kube-api-access-zpwf9") pod "01d59985-d42f-42a7-9af0-01420a06b702" (UID: "01d59985-d42f-42a7-9af0-01420a06b702"). InnerVolumeSpecName "kube-api-access-zpwf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.148360 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-inventory" (OuterVolumeSpecName: "inventory") pod "01d59985-d42f-42a7-9af0-01420a06b702" (UID: "01d59985-d42f-42a7-9af0-01420a06b702"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.149736 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01d59985-d42f-42a7-9af0-01420a06b702" (UID: "01d59985-d42f-42a7-9af0-01420a06b702"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.196050 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.196093 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpwf9\" (UniqueName: \"kubernetes.io/projected/01d59985-d42f-42a7-9af0-01420a06b702-kube-api-access-zpwf9\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.196111 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01d59985-d42f-42a7-9af0-01420a06b702-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.422947 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" event={"ID":"01d59985-d42f-42a7-9af0-01420a06b702","Type":"ContainerDied","Data":"204067ab9a99762edce8885e28660f246e45ab01292544f54954c447c0c41605"} Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.422991 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204067ab9a99762edce8885e28660f246e45ab01292544f54954c447c0c41605" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 
19:16:02.423018 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6962s" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.552052 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j"] Jan 26 19:16:02 crc kubenswrapper[4770]: E0126 19:16:02.552865 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d59985-d42f-42a7-9af0-01420a06b702" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.552909 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d59985-d42f-42a7-9af0-01420a06b702" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.553422 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d59985-d42f-42a7-9af0-01420a06b702" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.555072 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.557521 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.557686 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.558895 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.559318 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.565269 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j"] Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.708036 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.708245 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.708449 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnjvl\" (UniqueName: \"kubernetes.io/projected/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-kube-api-access-qnjvl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.810591 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.810851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnjvl\" (UniqueName: \"kubernetes.io/projected/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-kube-api-access-qnjvl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.811037 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.818548 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.826245 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.833048 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnjvl\" (UniqueName: \"kubernetes.io/projected/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-kube-api-access-qnjvl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:02 crc kubenswrapper[4770]: I0126 19:16:02.897189 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:03 crc kubenswrapper[4770]: I0126 19:16:03.668917 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j"] Jan 26 19:16:04 crc kubenswrapper[4770]: I0126 19:16:04.446600 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" event={"ID":"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e","Type":"ContainerStarted","Data":"dcd922c2520bebfb78f52841aef54bb21f19a32f159550a830f5d2fa11ee2cc5"} Jan 26 19:16:04 crc kubenswrapper[4770]: I0126 19:16:04.446891 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" event={"ID":"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e","Type":"ContainerStarted","Data":"53a02885afe1c382a0e19219c63c2c7cd14ca689553835c4414947712ee258f4"} Jan 26 19:16:04 crc kubenswrapper[4770]: I0126 19:16:04.465779 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" podStartSLOduration=2.03872855 podStartE2EDuration="2.465742844s" podCreationTimestamp="2026-01-26 19:16:02 +0000 UTC" firstStartedPulling="2026-01-26 19:16:03.675381292 +0000 UTC m=+2048.240288024" lastFinishedPulling="2026-01-26 19:16:04.102395576 +0000 UTC m=+2048.667302318" observedRunningTime="2026-01-26 19:16:04.459978096 +0000 UTC m=+2049.024884898" watchObservedRunningTime="2026-01-26 19:16:04.465742844 +0000 UTC m=+2049.030649616" Jan 26 19:16:14 crc kubenswrapper[4770]: I0126 19:16:14.553529 4770 generic.go:334] "Generic (PLEG): container finished" podID="4fdf356a-1a71-4b6f-92aa-c2c3a963f28e" containerID="dcd922c2520bebfb78f52841aef54bb21f19a32f159550a830f5d2fa11ee2cc5" exitCode=0 Jan 26 19:16:14 crc kubenswrapper[4770]: I0126 19:16:14.553620 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" event={"ID":"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e","Type":"ContainerDied","Data":"dcd922c2520bebfb78f52841aef54bb21f19a32f159550a830f5d2fa11ee2cc5"} Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.049442 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.128012 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-inventory\") pod \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.129287 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnjvl\" (UniqueName: \"kubernetes.io/projected/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-kube-api-access-qnjvl\") pod \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.129583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-ssh-key-openstack-edpm-ipam\") pod \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\" (UID: \"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e\") " Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.134937 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-kube-api-access-qnjvl" (OuterVolumeSpecName: "kube-api-access-qnjvl") pod "4fdf356a-1a71-4b6f-92aa-c2c3a963f28e" (UID: "4fdf356a-1a71-4b6f-92aa-c2c3a963f28e"). InnerVolumeSpecName "kube-api-access-qnjvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.163014 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-inventory" (OuterVolumeSpecName: "inventory") pod "4fdf356a-1a71-4b6f-92aa-c2c3a963f28e" (UID: "4fdf356a-1a71-4b6f-92aa-c2c3a963f28e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.166916 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4fdf356a-1a71-4b6f-92aa-c2c3a963f28e" (UID: "4fdf356a-1a71-4b6f-92aa-c2c3a963f28e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.231920 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.232081 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnjvl\" (UniqueName: \"kubernetes.io/projected/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-kube-api-access-qnjvl\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.232140 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdf356a-1a71-4b6f-92aa-c2c3a963f28e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.576016 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" 
event={"ID":"4fdf356a-1a71-4b6f-92aa-c2c3a963f28e","Type":"ContainerDied","Data":"53a02885afe1c382a0e19219c63c2c7cd14ca689553835c4414947712ee258f4"} Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.576066 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53a02885afe1c382a0e19219c63c2c7cd14ca689553835c4414947712ee258f4" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.576080 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.719868 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr"] Jan 26 19:16:16 crc kubenswrapper[4770]: E0126 19:16:16.720257 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdf356a-1a71-4b6f-92aa-c2c3a963f28e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.720275 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdf356a-1a71-4b6f-92aa-c2c3a963f28e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.720470 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdf356a-1a71-4b6f-92aa-c2c3a963f28e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.721130 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.726724 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.726845 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.726859 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.726978 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.727231 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.727386 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.728589 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.728666 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.743580 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr"] Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.845819 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846192 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846217 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846234 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846267 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846294 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846328 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77mct\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-kube-api-access-77mct\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846390 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846409 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846462 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846496 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.846569 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948075 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948142 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948164 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948180 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948204 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948233 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948259 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77mct\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-kube-api-access-77mct\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: 
\"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948307 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948326 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948355 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948371 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948395 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948415 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.948458 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.952612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc 
kubenswrapper[4770]: I0126 19:16:16.952738 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.953362 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.954200 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.954658 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.955075 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.955486 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.955645 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.956827 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.963628 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.964434 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.964691 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.966056 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77mct\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-kube-api-access-77mct\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:16 crc kubenswrapper[4770]: I0126 19:16:16.967097 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr\" (UID: 
\"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:17 crc kubenswrapper[4770]: I0126 19:16:17.036530 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:16:17 crc kubenswrapper[4770]: I0126 19:16:17.603580 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr"] Jan 26 19:16:18 crc kubenswrapper[4770]: I0126 19:16:18.599722 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" event={"ID":"514407e1-deb8-4ac4-bf0e-9b93842cb8f9","Type":"ContainerStarted","Data":"334c7041c08e9f9175769c84793042a596140c47475a025636a21b6685ad580b"} Jan 26 19:16:18 crc kubenswrapper[4770]: I0126 19:16:18.600404 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" event={"ID":"514407e1-deb8-4ac4-bf0e-9b93842cb8f9","Type":"ContainerStarted","Data":"0c9abecdc96e75e9a55f93046e70a5613b39c12b4ef7b88a32c31dddf4a37cdd"} Jan 26 19:16:18 crc kubenswrapper[4770]: I0126 19:16:18.624509 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" podStartSLOduration=2.159272366 podStartE2EDuration="2.624491947s" podCreationTimestamp="2026-01-26 19:16:16 +0000 UTC" firstStartedPulling="2026-01-26 19:16:17.615108291 +0000 UTC m=+2062.180015033" lastFinishedPulling="2026-01-26 19:16:18.080327882 +0000 UTC m=+2062.645234614" observedRunningTime="2026-01-26 19:16:18.623563141 +0000 UTC m=+2063.188469883" watchObservedRunningTime="2026-01-26 19:16:18.624491947 +0000 UTC m=+2063.189398679" Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.330288 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.330864 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.330943 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.332178 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcd6cbdcbb54366ae41277c5e0ca70660323878aa6ec238cecc096b0604b1641"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.332288 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://bcd6cbdcbb54366ae41277c5e0ca70660323878aa6ec238cecc096b0604b1641" gracePeriod=600 Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.720922 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="bcd6cbdcbb54366ae41277c5e0ca70660323878aa6ec238cecc096b0604b1641" exitCode=0 Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.720998 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"bcd6cbdcbb54366ae41277c5e0ca70660323878aa6ec238cecc096b0604b1641"} Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.721354 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b"} Jan 26 19:16:30 crc kubenswrapper[4770]: I0126 19:16:30.721428 4770 scope.go:117] "RemoveContainer" containerID="0c799035798bba8009d7267e3054e800aa985af1245393d9b92ff9f3c2f56aa3" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.067525 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gtkww"] Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.071237 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.104004 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtkww"] Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.157850 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-utilities\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.157914 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-catalog-content\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.158232 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsll\" (UniqueName: \"kubernetes.io/projected/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-kube-api-access-lfsll\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.259645 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsll\" (UniqueName: \"kubernetes.io/projected/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-kube-api-access-lfsll\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.259768 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-utilities\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.259818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-catalog-content\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.260481 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-catalog-content\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.260912 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-utilities\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.290466 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsll\" (UniqueName: \"kubernetes.io/projected/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-kube-api-access-lfsll\") pod \"redhat-operators-gtkww\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.391026 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.874362 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gtkww"] Jan 26 19:16:45 crc kubenswrapper[4770]: I0126 19:16:45.904761 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtkww" event={"ID":"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa","Type":"ContainerStarted","Data":"01e5d244b47e8e3a8a1b2df237408e7a65b44eedd3d0fda774ee797f11a98ebf"} Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.870137 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wlkq4"] Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.873550 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.895861 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wlkq4"] Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.917134 4770 generic.go:334] "Generic (PLEG): container finished" podID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerID="498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa" exitCode=0 Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.917186 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtkww" event={"ID":"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa","Type":"ContainerDied","Data":"498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa"} Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.995560 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-utilities\") pod \"certified-operators-wlkq4\" (UID: 
\"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.995625 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-catalog-content\") pod \"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:46 crc kubenswrapper[4770]: I0126 19:16:46.995811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsvkn\" (UniqueName: \"kubernetes.io/projected/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-kube-api-access-bsvkn\") pod \"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.097630 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsvkn\" (UniqueName: \"kubernetes.io/projected/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-kube-api-access-bsvkn\") pod \"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.097863 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-utilities\") pod \"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.097905 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-catalog-content\") pod 
\"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.098447 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-utilities\") pod \"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.098509 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-catalog-content\") pod \"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.131117 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsvkn\" (UniqueName: \"kubernetes.io/projected/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-kube-api-access-bsvkn\") pod \"certified-operators-wlkq4\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.204709 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.814204 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wlkq4"] Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.928997 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtkww" event={"ID":"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa","Type":"ContainerStarted","Data":"045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9"} Jan 26 19:16:47 crc kubenswrapper[4770]: I0126 19:16:47.930809 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlkq4" event={"ID":"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d","Type":"ContainerStarted","Data":"533dee5e8e5c2ae75984e000ac737586ae87e3211528b7e2f50f6731d608d2aa"} Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.257643 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqsqq"] Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.262319 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.277163 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqsqq"] Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.327264 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svxjd\" (UniqueName: \"kubernetes.io/projected/8d8ca055-fd79-41e5-a48e-cebd6bdae263-kube-api-access-svxjd\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.327675 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-catalog-content\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.327767 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-utilities\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.429613 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-utilities\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.429759 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-svxjd\" (UniqueName: \"kubernetes.io/projected/8d8ca055-fd79-41e5-a48e-cebd6bdae263-kube-api-access-svxjd\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.429804 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-catalog-content\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.430260 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-utilities\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.431552 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-catalog-content\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.452444 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svxjd\" (UniqueName: \"kubernetes.io/projected/8d8ca055-fd79-41e5-a48e-cebd6bdae263-kube-api-access-svxjd\") pod \"redhat-marketplace-sqsqq\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.598232 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.926625 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqsqq"] Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.942715 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqsqq" event={"ID":"8d8ca055-fd79-41e5-a48e-cebd6bdae263","Type":"ContainerStarted","Data":"858335e342c14df8e06b843c7d337cbf689ffdb6f7822e282b89fc2158684a5c"} Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.944009 4770 generic.go:334] "Generic (PLEG): container finished" podID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerID="347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849" exitCode=0 Jan 26 19:16:48 crc kubenswrapper[4770]: I0126 19:16:48.945058 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlkq4" event={"ID":"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d","Type":"ContainerDied","Data":"347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849"} Jan 26 19:16:49 crc kubenswrapper[4770]: I0126 19:16:49.959772 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqsqq" event={"ID":"8d8ca055-fd79-41e5-a48e-cebd6bdae263","Type":"ContainerStarted","Data":"9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad"} Jan 26 19:16:50 crc kubenswrapper[4770]: I0126 19:16:50.982778 4770 generic.go:334] "Generic (PLEG): container finished" podID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerID="9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad" exitCode=0 Jan 26 19:16:50 crc kubenswrapper[4770]: I0126 19:16:50.982895 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqsqq" 
event={"ID":"8d8ca055-fd79-41e5-a48e-cebd6bdae263","Type":"ContainerDied","Data":"9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad"} Jan 26 19:16:50 crc kubenswrapper[4770]: I0126 19:16:50.988438 4770 generic.go:334] "Generic (PLEG): container finished" podID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerID="045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9" exitCode=0 Jan 26 19:16:50 crc kubenswrapper[4770]: I0126 19:16:50.988490 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtkww" event={"ID":"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa","Type":"ContainerDied","Data":"045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9"} Jan 26 19:16:51 crc kubenswrapper[4770]: I0126 19:16:51.999121 4770 generic.go:334] "Generic (PLEG): container finished" podID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerID="a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0" exitCode=0 Jan 26 19:16:51 crc kubenswrapper[4770]: I0126 19:16:51.999230 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlkq4" event={"ID":"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d","Type":"ContainerDied","Data":"a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0"} Jan 26 19:16:53 crc kubenswrapper[4770]: I0126 19:16:53.009252 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtkww" event={"ID":"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa","Type":"ContainerStarted","Data":"f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b"} Jan 26 19:16:53 crc kubenswrapper[4770]: I0126 19:16:53.011760 4770 generic.go:334] "Generic (PLEG): container finished" podID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerID="849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e" exitCode=0 Jan 26 19:16:53 crc kubenswrapper[4770]: I0126 19:16:53.011812 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sqsqq" event={"ID":"8d8ca055-fd79-41e5-a48e-cebd6bdae263","Type":"ContainerDied","Data":"849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e"} Jan 26 19:16:53 crc kubenswrapper[4770]: I0126 19:16:53.015394 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlkq4" event={"ID":"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d","Type":"ContainerStarted","Data":"3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162"} Jan 26 19:16:53 crc kubenswrapper[4770]: I0126 19:16:53.035035 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gtkww" podStartSLOduration=3.158502763 podStartE2EDuration="8.035020603s" podCreationTimestamp="2026-01-26 19:16:45 +0000 UTC" firstStartedPulling="2026-01-26 19:16:46.921018886 +0000 UTC m=+2091.485925618" lastFinishedPulling="2026-01-26 19:16:51.797536726 +0000 UTC m=+2096.362443458" observedRunningTime="2026-01-26 19:16:53.034218041 +0000 UTC m=+2097.599124773" watchObservedRunningTime="2026-01-26 19:16:53.035020603 +0000 UTC m=+2097.599927335" Jan 26 19:16:53 crc kubenswrapper[4770]: I0126 19:16:53.064872 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wlkq4" podStartSLOduration=3.5203027799999997 podStartE2EDuration="7.064854171s" podCreationTimestamp="2026-01-26 19:16:46 +0000 UTC" firstStartedPulling="2026-01-26 19:16:48.948835847 +0000 UTC m=+2093.513742579" lastFinishedPulling="2026-01-26 19:16:52.493387238 +0000 UTC m=+2097.058293970" observedRunningTime="2026-01-26 19:16:53.055833704 +0000 UTC m=+2097.620740436" watchObservedRunningTime="2026-01-26 19:16:53.064854171 +0000 UTC m=+2097.629760903" Jan 26 19:16:54 crc kubenswrapper[4770]: I0126 19:16:54.025592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqsqq" 
event={"ID":"8d8ca055-fd79-41e5-a48e-cebd6bdae263","Type":"ContainerStarted","Data":"79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044"} Jan 26 19:16:54 crc kubenswrapper[4770]: I0126 19:16:54.064652 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqsqq" podStartSLOduration=3.658171905 podStartE2EDuration="6.064635233s" podCreationTimestamp="2026-01-26 19:16:48 +0000 UTC" firstStartedPulling="2026-01-26 19:16:50.986819845 +0000 UTC m=+2095.551726587" lastFinishedPulling="2026-01-26 19:16:53.393283183 +0000 UTC m=+2097.958189915" observedRunningTime="2026-01-26 19:16:54.060752017 +0000 UTC m=+2098.625658749" watchObservedRunningTime="2026-01-26 19:16:54.064635233 +0000 UTC m=+2098.629541965" Jan 26 19:16:55 crc kubenswrapper[4770]: I0126 19:16:55.391607 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:55 crc kubenswrapper[4770]: I0126 19:16:55.392919 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:16:56 crc kubenswrapper[4770]: I0126 19:16:56.460706 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gtkww" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="registry-server" probeResult="failure" output=< Jan 26 19:16:56 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 19:16:56 crc kubenswrapper[4770]: > Jan 26 19:16:57 crc kubenswrapper[4770]: I0126 19:16:57.205475 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:57 crc kubenswrapper[4770]: I0126 19:16:57.205602 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:57 crc kubenswrapper[4770]: 
I0126 19:16:57.268566 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:58 crc kubenswrapper[4770]: I0126 19:16:58.136288 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:16:58 crc kubenswrapper[4770]: I0126 19:16:58.598940 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:58 crc kubenswrapper[4770]: I0126 19:16:58.599332 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:58 crc kubenswrapper[4770]: I0126 19:16:58.647120 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:16:59 crc kubenswrapper[4770]: I0126 19:16:59.130379 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:17:01 crc kubenswrapper[4770]: I0126 19:17:01.446361 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wlkq4"] Jan 26 19:17:01 crc kubenswrapper[4770]: I0126 19:17:01.447087 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wlkq4" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="registry-server" containerID="cri-o://3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162" gracePeriod=2 Jan 26 19:17:01 crc kubenswrapper[4770]: I0126 19:17:01.982575 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.030646 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-catalog-content\") pod \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.030781 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsvkn\" (UniqueName: \"kubernetes.io/projected/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-kube-api-access-bsvkn\") pod \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.030851 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-utilities\") pod \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\" (UID: \"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d\") " Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.031744 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-utilities" (OuterVolumeSpecName: "utilities") pod "2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" (UID: "2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.043092 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-kube-api-access-bsvkn" (OuterVolumeSpecName: "kube-api-access-bsvkn") pod "2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" (UID: "2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d"). InnerVolumeSpecName "kube-api-access-bsvkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.048467 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqsqq"] Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.049170 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqsqq" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="registry-server" containerID="cri-o://79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044" gracePeriod=2 Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.084873 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" (UID: "2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.117269 4770 generic.go:334] "Generic (PLEG): container finished" podID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerID="3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162" exitCode=0 Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.117298 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wlkq4" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.117357 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlkq4" event={"ID":"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d","Type":"ContainerDied","Data":"3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162"} Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.117389 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlkq4" event={"ID":"2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d","Type":"ContainerDied","Data":"533dee5e8e5c2ae75984e000ac737586ae87e3211528b7e2f50f6731d608d2aa"} Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.117409 4770 scope.go:117] "RemoveContainer" containerID="3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.120249 4770 generic.go:334] "Generic (PLEG): container finished" podID="514407e1-deb8-4ac4-bf0e-9b93842cb8f9" containerID="334c7041c08e9f9175769c84793042a596140c47475a025636a21b6685ad580b" exitCode=0 Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.120305 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" event={"ID":"514407e1-deb8-4ac4-bf0e-9b93842cb8f9","Type":"ContainerDied","Data":"334c7041c08e9f9175769c84793042a596140c47475a025636a21b6685ad580b"} Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.132988 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.133205 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsvkn\" (UniqueName: 
\"kubernetes.io/projected/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-kube-api-access-bsvkn\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.133339 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.267135 4770 scope.go:117] "RemoveContainer" containerID="a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.285013 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wlkq4"] Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.297406 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wlkq4"] Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.304760 4770 scope.go:117] "RemoveContainer" containerID="347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.327038 4770 scope.go:117] "RemoveContainer" containerID="3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162" Jan 26 19:17:02 crc kubenswrapper[4770]: E0126 19:17:02.327956 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162\": container with ID starting with 3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162 not found: ID does not exist" containerID="3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.327996 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162"} err="failed to get container status 
\"3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162\": rpc error: code = NotFound desc = could not find container \"3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162\": container with ID starting with 3df35cc898ff3001bf79a1cce1edddf2d789d7dc2e79fa98954eae72105b2162 not found: ID does not exist" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.328039 4770 scope.go:117] "RemoveContainer" containerID="a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0" Jan 26 19:17:02 crc kubenswrapper[4770]: E0126 19:17:02.328494 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0\": container with ID starting with a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0 not found: ID does not exist" containerID="a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.328536 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0"} err="failed to get container status \"a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0\": rpc error: code = NotFound desc = could not find container \"a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0\": container with ID starting with a7a0f8d2fe5eeab2897e6a391ae1ff5f105f75761b06902ca09fa31fbc4cc2d0 not found: ID does not exist" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.328568 4770 scope.go:117] "RemoveContainer" containerID="347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849" Jan 26 19:17:02 crc kubenswrapper[4770]: E0126 19:17:02.328999 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849\": container with ID starting with 347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849 not found: ID does not exist" containerID="347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.329032 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849"} err="failed to get container status \"347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849\": rpc error: code = NotFound desc = could not find container \"347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849\": container with ID starting with 347bcbad67afda00c38291fdead4fa46dcc425e49e3bac0844fec07dfff13849 not found: ID does not exist" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.420039 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.541935 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-catalog-content\") pod \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.542892 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-utilities\") pod \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.543109 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svxjd\" (UniqueName: 
\"kubernetes.io/projected/8d8ca055-fd79-41e5-a48e-cebd6bdae263-kube-api-access-svxjd\") pod \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\" (UID: \"8d8ca055-fd79-41e5-a48e-cebd6bdae263\") " Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.543804 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-utilities" (OuterVolumeSpecName: "utilities") pod "8d8ca055-fd79-41e5-a48e-cebd6bdae263" (UID: "8d8ca055-fd79-41e5-a48e-cebd6bdae263"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.544536 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.547955 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8ca055-fd79-41e5-a48e-cebd6bdae263-kube-api-access-svxjd" (OuterVolumeSpecName: "kube-api-access-svxjd") pod "8d8ca055-fd79-41e5-a48e-cebd6bdae263" (UID: "8d8ca055-fd79-41e5-a48e-cebd6bdae263"). InnerVolumeSpecName "kube-api-access-svxjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.564664 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d8ca055-fd79-41e5-a48e-cebd6bdae263" (UID: "8d8ca055-fd79-41e5-a48e-cebd6bdae263"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.646952 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d8ca055-fd79-41e5-a48e-cebd6bdae263-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:02 crc kubenswrapper[4770]: I0126 19:17:02.646983 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svxjd\" (UniqueName: \"kubernetes.io/projected/8d8ca055-fd79-41e5-a48e-cebd6bdae263-kube-api-access-svxjd\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.134207 4770 generic.go:334] "Generic (PLEG): container finished" podID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerID="79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044" exitCode=0 Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.134394 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqsqq" event={"ID":"8d8ca055-fd79-41e5-a48e-cebd6bdae263","Type":"ContainerDied","Data":"79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044"} Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.134470 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqsqq" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.134638 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqsqq" event={"ID":"8d8ca055-fd79-41e5-a48e-cebd6bdae263","Type":"ContainerDied","Data":"858335e342c14df8e06b843c7d337cbf689ffdb6f7822e282b89fc2158684a5c"} Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.134676 4770 scope.go:117] "RemoveContainer" containerID="79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.172980 4770 scope.go:117] "RemoveContainer" containerID="849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.186514 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqsqq"] Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.199163 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqsqq"] Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.205031 4770 scope.go:117] "RemoveContainer" containerID="9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.255168 4770 scope.go:117] "RemoveContainer" containerID="79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044" Jan 26 19:17:03 crc kubenswrapper[4770]: E0126 19:17:03.255667 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044\": container with ID starting with 79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044 not found: ID does not exist" containerID="79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.255735 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044"} err="failed to get container status \"79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044\": rpc error: code = NotFound desc = could not find container \"79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044\": container with ID starting with 79964b6b50c2450f26864b8a8dd90d1d647b761bd2042266a4102c4e345c8044 not found: ID does not exist" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.255769 4770 scope.go:117] "RemoveContainer" containerID="849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e" Jan 26 19:17:03 crc kubenswrapper[4770]: E0126 19:17:03.256218 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e\": container with ID starting with 849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e not found: ID does not exist" containerID="849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.256252 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e"} err="failed to get container status \"849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e\": rpc error: code = NotFound desc = could not find container \"849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e\": container with ID starting with 849e61f2303d2813b69f69fad03232915ae63654183e4a05a7d7f1615502387e not found: ID does not exist" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.256285 4770 scope.go:117] "RemoveContainer" containerID="9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad" Jan 26 19:17:03 crc kubenswrapper[4770]: E0126 
19:17:03.256687 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad\": container with ID starting with 9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad not found: ID does not exist" containerID="9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.256739 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad"} err="failed to get container status \"9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad\": rpc error: code = NotFound desc = could not find container \"9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad\": container with ID starting with 9121b86c1b0cd6c62efd9c443efa116829d51bf1da7c0a6f71f0b279edfc08ad not found: ID does not exist" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.584347 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.672685 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ssh-key-openstack-edpm-ipam\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.672841 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-telemetry-combined-ca-bundle\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.672872 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77mct\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-kube-api-access-77mct\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.672950 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-neutron-metadata-combined-ca-bundle\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673014 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: 
\"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673047 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-bootstrap-combined-ca-bundle\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673123 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673159 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ovn-combined-ca-bundle\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673193 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-repo-setup-combined-ca-bundle\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673216 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: 
\"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673260 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673294 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-inventory\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673332 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-nova-combined-ca-bundle\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.673366 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-libvirt-combined-ca-bundle\") pod \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\" (UID: \"514407e1-deb8-4ac4-bf0e-9b93842cb8f9\") " Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.684929 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.685136 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.689125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.690166 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.694964 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-kube-api-access-77mct" (OuterVolumeSpecName: "kube-api-access-77mct") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "kube-api-access-77mct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.696414 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.704896 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.705067 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.708953 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.708961 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.715750 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.750879 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776733 4770 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776762 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776772 4770 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776782 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776791 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776800 4770 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776809 4770 reconciler_common.go:293] 
"Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776818 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776827 4770 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776835 4770 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776843 4770 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.776851 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77mct\" (UniqueName: \"kubernetes.io/projected/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-kube-api-access-77mct\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.794450 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" path="/var/lib/kubelet/pods/2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d/volumes" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 
19:17:03.794849 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-inventory" (OuterVolumeSpecName: "inventory") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.795274 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" path="/var/lib/kubelet/pods/8d8ca055-fd79-41e5-a48e-cebd6bdae263/volumes" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.803740 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "514407e1-deb8-4ac4-bf0e-9b93842cb8f9" (UID: "514407e1-deb8-4ac4-bf0e-9b93842cb8f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.879142 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:03 crc kubenswrapper[4770]: I0126 19:17:03.879176 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/514407e1-deb8-4ac4-bf0e-9b93842cb8f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.147131 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" event={"ID":"514407e1-deb8-4ac4-bf0e-9b93842cb8f9","Type":"ContainerDied","Data":"0c9abecdc96e75e9a55f93046e70a5613b39c12b4ef7b88a32c31dddf4a37cdd"} Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.147454 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c9abecdc96e75e9a55f93046e70a5613b39c12b4ef7b88a32c31dddf4a37cdd" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.147193 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.252242 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6"] Jan 26 19:17:04 crc kubenswrapper[4770]: E0126 19:17:04.252861 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="extract-utilities" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.252896 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="extract-utilities" Jan 26 19:17:04 crc kubenswrapper[4770]: E0126 19:17:04.252924 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="extract-content" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.252935 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="extract-content" Jan 26 19:17:04 crc kubenswrapper[4770]: E0126 19:17:04.252970 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="registry-server" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.252979 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="registry-server" Jan 26 19:17:04 crc kubenswrapper[4770]: E0126 19:17:04.253000 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514407e1-deb8-4ac4-bf0e-9b93842cb8f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.253010 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="514407e1-deb8-4ac4-bf0e-9b93842cb8f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 19:17:04 crc kubenswrapper[4770]: E0126 19:17:04.253021 
4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="registry-server" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.253039 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="registry-server" Jan 26 19:17:04 crc kubenswrapper[4770]: E0126 19:17:04.253065 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="extract-content" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.253074 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="extract-content" Jan 26 19:17:04 crc kubenswrapper[4770]: E0126 19:17:04.253098 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="extract-utilities" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.253106 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="extract-utilities" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.253346 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="514407e1-deb8-4ac4-bf0e-9b93842cb8f9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.253371 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d8ca055-fd79-41e5-a48e-cebd6bdae263" containerName="registry-server" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.253404 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9b650c-c7e2-4bdc-a663-db6baa7f7a2d" containerName="registry-server" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.254301 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.258290 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.258678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.258982 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.259161 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.259224 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.268216 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6"] Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.287377 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.287417 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: 
\"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.287470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpcf5\" (UniqueName: \"kubernetes.io/projected/483f1a9a-7983-4628-bc2e-ab37a776dcf6-kube-api-access-fpcf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.287523 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.287554 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.389366 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.389465 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.389487 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.389536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpcf5\" (UniqueName: \"kubernetes.io/projected/483f1a9a-7983-4628-bc2e-ab37a776dcf6-kube-api-access-fpcf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.389592 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.392058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.394737 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.394744 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.398788 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.415227 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpcf5\" (UniqueName: \"kubernetes.io/projected/483f1a9a-7983-4628-bc2e-ab37a776dcf6-kube-api-access-fpcf5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-hk2v6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:04 crc kubenswrapper[4770]: I0126 19:17:04.587310 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:17:05 crc kubenswrapper[4770]: I0126 19:17:05.147095 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6"] Jan 26 19:17:05 crc kubenswrapper[4770]: I0126 19:17:05.162153 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" event={"ID":"483f1a9a-7983-4628-bc2e-ab37a776dcf6","Type":"ContainerStarted","Data":"24b5c7bd4e13d8f79ce24d6c40a69408bc8e08ac0491bec7f7417eb176900619"} Jan 26 19:17:05 crc kubenswrapper[4770]: I0126 19:17:05.453931 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:17:05 crc kubenswrapper[4770]: I0126 19:17:05.503933 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:17:06 crc kubenswrapper[4770]: I0126 19:17:06.174161 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" event={"ID":"483f1a9a-7983-4628-bc2e-ab37a776dcf6","Type":"ContainerStarted","Data":"a585e1a31f5c3c1b037e7944f02416e2df7b4e5354c093f795c32c0f782b6fd1"} Jan 26 19:17:06 crc kubenswrapper[4770]: I0126 19:17:06.193250 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" podStartSLOduration=1.689826255 podStartE2EDuration="2.193232743s" podCreationTimestamp="2026-01-26 19:17:04 +0000 UTC" firstStartedPulling="2026-01-26 19:17:05.153484824 +0000 UTC m=+2109.718391566" lastFinishedPulling="2026-01-26 19:17:05.656891302 +0000 UTC m=+2110.221798054" observedRunningTime="2026-01-26 19:17:06.189820908 +0000 UTC m=+2110.754727660" watchObservedRunningTime="2026-01-26 19:17:06.193232743 +0000 UTC m=+2110.758139485" Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 
19:17:07.049135 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtkww"] Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.184573 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gtkww" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="registry-server" containerID="cri-o://f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b" gracePeriod=2 Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.677847 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.859997 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfsll\" (UniqueName: \"kubernetes.io/projected/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-kube-api-access-lfsll\") pod \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.860130 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-catalog-content\") pod \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.860399 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-utilities\") pod \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\" (UID: \"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa\") " Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.861777 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-utilities" (OuterVolumeSpecName: 
"utilities") pod "ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" (UID: "ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.869526 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-kube-api-access-lfsll" (OuterVolumeSpecName: "kube-api-access-lfsll") pod "ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" (UID: "ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa"). InnerVolumeSpecName "kube-api-access-lfsll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.965043 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:07 crc kubenswrapper[4770]: I0126 19:17:07.965075 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfsll\" (UniqueName: \"kubernetes.io/projected/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-kube-api-access-lfsll\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.062928 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" (UID: "ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.067072 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.193501 4770 generic.go:334] "Generic (PLEG): container finished" podID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerID="f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b" exitCode=0 Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.193542 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtkww" event={"ID":"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa","Type":"ContainerDied","Data":"f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b"} Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.193569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gtkww" event={"ID":"ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa","Type":"ContainerDied","Data":"01e5d244b47e8e3a8a1b2df237408e7a65b44eedd3d0fda774ee797f11a98ebf"} Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.193584 4770 scope.go:117] "RemoveContainer" containerID="f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.193720 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gtkww" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.210777 4770 scope.go:117] "RemoveContainer" containerID="045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.226323 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gtkww"] Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.234235 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gtkww"] Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.248439 4770 scope.go:117] "RemoveContainer" containerID="498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.294151 4770 scope.go:117] "RemoveContainer" containerID="f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b" Jan 26 19:17:08 crc kubenswrapper[4770]: E0126 19:17:08.294673 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b\": container with ID starting with f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b not found: ID does not exist" containerID="f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.294722 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b"} err="failed to get container status \"f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b\": rpc error: code = NotFound desc = could not find container \"f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b\": container with ID starting with f3f4e3d0d08b71561fe76009c59224e8831f10862cbf17dd3c3d9a1b786ba64b not found: ID does 
not exist" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.294745 4770 scope.go:117] "RemoveContainer" containerID="045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9" Jan 26 19:17:08 crc kubenswrapper[4770]: E0126 19:17:08.296034 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9\": container with ID starting with 045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9 not found: ID does not exist" containerID="045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.296066 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9"} err="failed to get container status \"045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9\": rpc error: code = NotFound desc = could not find container \"045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9\": container with ID starting with 045ceb464615cfda1701afbe0485d27c23b82eda7f644e9ce95e5589230f98a9 not found: ID does not exist" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.296086 4770 scope.go:117] "RemoveContainer" containerID="498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa" Jan 26 19:17:08 crc kubenswrapper[4770]: E0126 19:17:08.296758 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa\": container with ID starting with 498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa not found: ID does not exist" containerID="498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa" Jan 26 19:17:08 crc kubenswrapper[4770]: I0126 19:17:08.296824 4770 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa"} err="failed to get container status \"498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa\": rpc error: code = NotFound desc = could not find container \"498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa\": container with ID starting with 498565a5428321f9688e6deafda47a2bd730806729a8c5cae73829316b7fe5fa not found: ID does not exist" Jan 26 19:17:09 crc kubenswrapper[4770]: I0126 19:17:09.783605 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" path="/var/lib/kubelet/pods/ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa/volumes" Jan 26 19:18:09 crc kubenswrapper[4770]: I0126 19:18:09.913564 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rpltg"] Jan 26 19:18:09 crc kubenswrapper[4770]: E0126 19:18:09.914858 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="registry-server" Jan 26 19:18:09 crc kubenswrapper[4770]: I0126 19:18:09.914883 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="registry-server" Jan 26 19:18:09 crc kubenswrapper[4770]: E0126 19:18:09.914905 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="extract-content" Jan 26 19:18:09 crc kubenswrapper[4770]: I0126 19:18:09.914916 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="extract-content" Jan 26 19:18:09 crc kubenswrapper[4770]: E0126 19:18:09.914948 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="extract-utilities" Jan 26 19:18:09 crc kubenswrapper[4770]: I0126 19:18:09.914961 4770 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="extract-utilities" Jan 26 19:18:09 crc kubenswrapper[4770]: I0126 19:18:09.915341 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab37f7a1-782b-4dc5-91b3-d4caaae6a9fa" containerName="registry-server" Jan 26 19:18:09 crc kubenswrapper[4770]: I0126 19:18:09.917662 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:09 crc kubenswrapper[4770]: I0126 19:18:09.924329 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpltg"] Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.020455 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgsw\" (UniqueName: \"kubernetes.io/projected/d20036ed-341f-4e3b-9eac-acb09d66b580-kube-api-access-rfgsw\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.021276 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-utilities\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.021428 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-catalog-content\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 
19:18:10.123661 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-utilities\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.123755 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-catalog-content\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.123886 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgsw\" (UniqueName: \"kubernetes.io/projected/d20036ed-341f-4e3b-9eac-acb09d66b580-kube-api-access-rfgsw\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.124262 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-utilities\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.124275 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-catalog-content\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.162612 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgsw\" (UniqueName: \"kubernetes.io/projected/d20036ed-341f-4e3b-9eac-acb09d66b580-kube-api-access-rfgsw\") pod \"community-operators-rpltg\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.252667 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:10 crc kubenswrapper[4770]: I0126 19:18:10.825283 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpltg"] Jan 26 19:18:11 crc kubenswrapper[4770]: I0126 19:18:11.829292 4770 generic.go:334] "Generic (PLEG): container finished" podID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerID="8ac400813a20d7ba2b210cc04fb085b4728f0b28423605a1e0c47bcc9a4e2418" exitCode=0 Jan 26 19:18:11 crc kubenswrapper[4770]: I0126 19:18:11.829354 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpltg" event={"ID":"d20036ed-341f-4e3b-9eac-acb09d66b580","Type":"ContainerDied","Data":"8ac400813a20d7ba2b210cc04fb085b4728f0b28423605a1e0c47bcc9a4e2418"} Jan 26 19:18:11 crc kubenswrapper[4770]: I0126 19:18:11.829993 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpltg" event={"ID":"d20036ed-341f-4e3b-9eac-acb09d66b580","Type":"ContainerStarted","Data":"dbef778cb93242104883ada8f3d28779cc9cfd4955a2112b754682e5e0fe64e2"} Jan 26 19:18:13 crc kubenswrapper[4770]: I0126 19:18:13.862087 4770 generic.go:334] "Generic (PLEG): container finished" podID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerID="f308ee83fd262267b7d7d3cf7035c9ad6f6fd3366413a01becac03ccad7b1534" exitCode=0 Jan 26 19:18:13 crc kubenswrapper[4770]: I0126 19:18:13.862154 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rpltg" event={"ID":"d20036ed-341f-4e3b-9eac-acb09d66b580","Type":"ContainerDied","Data":"f308ee83fd262267b7d7d3cf7035c9ad6f6fd3366413a01becac03ccad7b1534"} Jan 26 19:18:14 crc kubenswrapper[4770]: I0126 19:18:14.875431 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpltg" event={"ID":"d20036ed-341f-4e3b-9eac-acb09d66b580","Type":"ContainerStarted","Data":"3e12426e06a66d7ee358dc2d57e5328537379233bb9ee48e4cae0056751a4cfa"} Jan 26 19:18:14 crc kubenswrapper[4770]: I0126 19:18:14.901656 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rpltg" podStartSLOduration=3.463112801 podStartE2EDuration="5.9016316s" podCreationTimestamp="2026-01-26 19:18:09 +0000 UTC" firstStartedPulling="2026-01-26 19:18:11.833209942 +0000 UTC m=+2176.398116684" lastFinishedPulling="2026-01-26 19:18:14.271728711 +0000 UTC m=+2178.836635483" observedRunningTime="2026-01-26 19:18:14.891908724 +0000 UTC m=+2179.456815496" watchObservedRunningTime="2026-01-26 19:18:14.9016316 +0000 UTC m=+2179.466538362" Jan 26 19:18:20 crc kubenswrapper[4770]: I0126 19:18:20.253708 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:20 crc kubenswrapper[4770]: I0126 19:18:20.254408 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:20 crc kubenswrapper[4770]: I0126 19:18:20.341144 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:21 crc kubenswrapper[4770]: I0126 19:18:21.010221 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:21 crc kubenswrapper[4770]: I0126 19:18:21.069267 4770 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpltg"] Jan 26 19:18:22 crc kubenswrapper[4770]: I0126 19:18:22.983192 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rpltg" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerName="registry-server" containerID="cri-o://3e12426e06a66d7ee358dc2d57e5328537379233bb9ee48e4cae0056751a4cfa" gracePeriod=2 Jan 26 19:18:23 crc kubenswrapper[4770]: I0126 19:18:23.992497 4770 generic.go:334] "Generic (PLEG): container finished" podID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerID="3e12426e06a66d7ee358dc2d57e5328537379233bb9ee48e4cae0056751a4cfa" exitCode=0 Jan 26 19:18:23 crc kubenswrapper[4770]: I0126 19:18:23.992588 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpltg" event={"ID":"d20036ed-341f-4e3b-9eac-acb09d66b580","Type":"ContainerDied","Data":"3e12426e06a66d7ee358dc2d57e5328537379233bb9ee48e4cae0056751a4cfa"} Jan 26 19:18:23 crc kubenswrapper[4770]: I0126 19:18:23.993169 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpltg" event={"ID":"d20036ed-341f-4e3b-9eac-acb09d66b580","Type":"ContainerDied","Data":"dbef778cb93242104883ada8f3d28779cc9cfd4955a2112b754682e5e0fe64e2"} Jan 26 19:18:23 crc kubenswrapper[4770]: I0126 19:18:23.993188 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbef778cb93242104883ada8f3d28779cc9cfd4955a2112b754682e5e0fe64e2" Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.006869 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.087960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-utilities\") pod \"d20036ed-341f-4e3b-9eac-acb09d66b580\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.088087 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-catalog-content\") pod \"d20036ed-341f-4e3b-9eac-acb09d66b580\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.088289 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfgsw\" (UniqueName: \"kubernetes.io/projected/d20036ed-341f-4e3b-9eac-acb09d66b580-kube-api-access-rfgsw\") pod \"d20036ed-341f-4e3b-9eac-acb09d66b580\" (UID: \"d20036ed-341f-4e3b-9eac-acb09d66b580\") " Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.089045 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-utilities" (OuterVolumeSpecName: "utilities") pod "d20036ed-341f-4e3b-9eac-acb09d66b580" (UID: "d20036ed-341f-4e3b-9eac-acb09d66b580"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.094908 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20036ed-341f-4e3b-9eac-acb09d66b580-kube-api-access-rfgsw" (OuterVolumeSpecName: "kube-api-access-rfgsw") pod "d20036ed-341f-4e3b-9eac-acb09d66b580" (UID: "d20036ed-341f-4e3b-9eac-acb09d66b580"). InnerVolumeSpecName "kube-api-access-rfgsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.143308 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d20036ed-341f-4e3b-9eac-acb09d66b580" (UID: "d20036ed-341f-4e3b-9eac-acb09d66b580"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.190885 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.191197 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfgsw\" (UniqueName: \"kubernetes.io/projected/d20036ed-341f-4e3b-9eac-acb09d66b580-kube-api-access-rfgsw\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:24 crc kubenswrapper[4770]: I0126 19:18:24.191315 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20036ed-341f-4e3b-9eac-acb09d66b580-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:25 crc kubenswrapper[4770]: I0126 19:18:25.005931 4770 generic.go:334] "Generic (PLEG): container finished" podID="483f1a9a-7983-4628-bc2e-ab37a776dcf6" containerID="a585e1a31f5c3c1b037e7944f02416e2df7b4e5354c093f795c32c0f782b6fd1" exitCode=0 Jan 26 19:18:25 crc kubenswrapper[4770]: I0126 19:18:25.006030 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpltg" Jan 26 19:18:25 crc kubenswrapper[4770]: I0126 19:18:25.006034 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" event={"ID":"483f1a9a-7983-4628-bc2e-ab37a776dcf6","Type":"ContainerDied","Data":"a585e1a31f5c3c1b037e7944f02416e2df7b4e5354c093f795c32c0f782b6fd1"} Jan 26 19:18:25 crc kubenswrapper[4770]: I0126 19:18:25.052452 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpltg"] Jan 26 19:18:25 crc kubenswrapper[4770]: I0126 19:18:25.059952 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rpltg"] Jan 26 19:18:25 crc kubenswrapper[4770]: I0126 19:18:25.784617 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" path="/var/lib/kubelet/pods/d20036ed-341f-4e3b-9eac-acb09d66b580/volumes" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.431869 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.538515 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ssh-key-openstack-edpm-ipam\") pod \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.538592 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovncontroller-config-0\") pod \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.538622 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-inventory\") pod \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.538777 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovn-combined-ca-bundle\") pod \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.538896 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpcf5\" (UniqueName: \"kubernetes.io/projected/483f1a9a-7983-4628-bc2e-ab37a776dcf6-kube-api-access-fpcf5\") pod \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\" (UID: \"483f1a9a-7983-4628-bc2e-ab37a776dcf6\") " Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.545687 4770 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "483f1a9a-7983-4628-bc2e-ab37a776dcf6" (UID: "483f1a9a-7983-4628-bc2e-ab37a776dcf6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.550740 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483f1a9a-7983-4628-bc2e-ab37a776dcf6-kube-api-access-fpcf5" (OuterVolumeSpecName: "kube-api-access-fpcf5") pod "483f1a9a-7983-4628-bc2e-ab37a776dcf6" (UID: "483f1a9a-7983-4628-bc2e-ab37a776dcf6"). InnerVolumeSpecName "kube-api-access-fpcf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.574935 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "483f1a9a-7983-4628-bc2e-ab37a776dcf6" (UID: "483f1a9a-7983-4628-bc2e-ab37a776dcf6"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.575095 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "483f1a9a-7983-4628-bc2e-ab37a776dcf6" (UID: "483f1a9a-7983-4628-bc2e-ab37a776dcf6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.580143 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-inventory" (OuterVolumeSpecName: "inventory") pod "483f1a9a-7983-4628-bc2e-ab37a776dcf6" (UID: "483f1a9a-7983-4628-bc2e-ab37a776dcf6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.641010 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.641037 4770 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.641046 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.641055 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483f1a9a-7983-4628-bc2e-ab37a776dcf6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:26 crc kubenswrapper[4770]: I0126 19:18:26.641063 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpcf5\" (UniqueName: \"kubernetes.io/projected/483f1a9a-7983-4628-bc2e-ab37a776dcf6-kube-api-access-fpcf5\") on node \"crc\" DevicePath \"\"" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.024727 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" event={"ID":"483f1a9a-7983-4628-bc2e-ab37a776dcf6","Type":"ContainerDied","Data":"24b5c7bd4e13d8f79ce24d6c40a69408bc8e08ac0491bec7f7417eb176900619"} Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.024765 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b5c7bd4e13d8f79ce24d6c40a69408bc8e08ac0491bec7f7417eb176900619" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.024807 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-hk2v6" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.139097 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg"] Jan 26 19:18:27 crc kubenswrapper[4770]: E0126 19:18:27.139541 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerName="extract-utilities" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.139564 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerName="extract-utilities" Jan 26 19:18:27 crc kubenswrapper[4770]: E0126 19:18:27.139586 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerName="registry-server" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.139592 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerName="registry-server" Jan 26 19:18:27 crc kubenswrapper[4770]: E0126 19:18:27.139604 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerName="extract-content" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.139610 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" 
containerName="extract-content" Jan 26 19:18:27 crc kubenswrapper[4770]: E0126 19:18:27.139623 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483f1a9a-7983-4628-bc2e-ab37a776dcf6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.139629 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="483f1a9a-7983-4628-bc2e-ab37a776dcf6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.140302 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d20036ed-341f-4e3b-9eac-acb09d66b580" containerName="registry-server" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.140338 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="483f1a9a-7983-4628-bc2e-ab37a776dcf6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.141004 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.144285 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.144307 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.144533 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.144741 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.144884 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.145030 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.154103 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg"] Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.252263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.252370 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.252428 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.252472 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.252554 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.252636 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jccr8\" (UniqueName: \"kubernetes.io/projected/5c761917-b83c-4c4b-8aff-79848506a7cd-kube-api-access-jccr8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.353907 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jccr8\" (UniqueName: \"kubernetes.io/projected/5c761917-b83c-4c4b-8aff-79848506a7cd-kube-api-access-jccr8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.354188 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.354312 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.354421 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.354498 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.355062 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.359305 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.359612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.360691 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.360815 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.361278 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.387596 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jccr8\" (UniqueName: \"kubernetes.io/projected/5c761917-b83c-4c4b-8aff-79848506a7cd-kube-api-access-jccr8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg\" (UID: 
\"5c761917-b83c-4c4b-8aff-79848506a7cd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:27 crc kubenswrapper[4770]: I0126 19:18:27.458604 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:18:28 crc kubenswrapper[4770]: W0126 19:18:28.067432 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c761917_b83c_4c4b_8aff_79848506a7cd.slice/crio-402c6028149edd9bef3a86fa315ad9de91c2d3af2d4f2653843c7b9e38a60764 WatchSource:0}: Error finding container 402c6028149edd9bef3a86fa315ad9de91c2d3af2d4f2653843c7b9e38a60764: Status 404 returned error can't find the container with id 402c6028149edd9bef3a86fa315ad9de91c2d3af2d4f2653843c7b9e38a60764 Jan 26 19:18:28 crc kubenswrapper[4770]: I0126 19:18:28.071650 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg"] Jan 26 19:18:29 crc kubenswrapper[4770]: I0126 19:18:29.048548 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" event={"ID":"5c761917-b83c-4c4b-8aff-79848506a7cd","Type":"ContainerStarted","Data":"2e5ca6e56114c979f159f42ff5308b5369aa5fae4c861fb68d6a18451e4fc167"} Jan 26 19:18:29 crc kubenswrapper[4770]: I0126 19:18:29.049225 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" event={"ID":"5c761917-b83c-4c4b-8aff-79848506a7cd","Type":"ContainerStarted","Data":"402c6028149edd9bef3a86fa315ad9de91c2d3af2d4f2653843c7b9e38a60764"} Jan 26 19:18:29 crc kubenswrapper[4770]: I0126 19:18:29.071840 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" 
podStartSLOduration=1.657540368 podStartE2EDuration="2.071811446s" podCreationTimestamp="2026-01-26 19:18:27 +0000 UTC" firstStartedPulling="2026-01-26 19:18:28.071897729 +0000 UTC m=+2192.636804471" lastFinishedPulling="2026-01-26 19:18:28.486168807 +0000 UTC m=+2193.051075549" observedRunningTime="2026-01-26 19:18:29.068204687 +0000 UTC m=+2193.633111439" watchObservedRunningTime="2026-01-26 19:18:29.071811446 +0000 UTC m=+2193.636718198" Jan 26 19:18:30 crc kubenswrapper[4770]: I0126 19:18:30.331369 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:18:30 crc kubenswrapper[4770]: I0126 19:18:30.331436 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:19:00 crc kubenswrapper[4770]: I0126 19:19:00.331352 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:19:00 crc kubenswrapper[4770]: I0126 19:19:00.332011 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:19:27 crc kubenswrapper[4770]: I0126 
19:19:27.663215 4770 generic.go:334] "Generic (PLEG): container finished" podID="5c761917-b83c-4c4b-8aff-79848506a7cd" containerID="2e5ca6e56114c979f159f42ff5308b5369aa5fae4c861fb68d6a18451e4fc167" exitCode=0 Jan 26 19:19:27 crc kubenswrapper[4770]: I0126 19:19:27.663322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" event={"ID":"5c761917-b83c-4c4b-8aff-79848506a7cd","Type":"ContainerDied","Data":"2e5ca6e56114c979f159f42ff5308b5369aa5fae4c861fb68d6a18451e4fc167"} Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.116229 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.258341 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5c761917-b83c-4c4b-8aff-79848506a7cd\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.259429 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-inventory\") pod \"5c761917-b83c-4c4b-8aff-79848506a7cd\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.259495 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-nova-metadata-neutron-config-0\") pod \"5c761917-b83c-4c4b-8aff-79848506a7cd\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.259597 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-metadata-combined-ca-bundle\") pod \"5c761917-b83c-4c4b-8aff-79848506a7cd\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.259682 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-ssh-key-openstack-edpm-ipam\") pod \"5c761917-b83c-4c4b-8aff-79848506a7cd\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.259795 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jccr8\" (UniqueName: \"kubernetes.io/projected/5c761917-b83c-4c4b-8aff-79848506a7cd-kube-api-access-jccr8\") pod \"5c761917-b83c-4c4b-8aff-79848506a7cd\" (UID: \"5c761917-b83c-4c4b-8aff-79848506a7cd\") " Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.269919 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5c761917-b83c-4c4b-8aff-79848506a7cd" (UID: "5c761917-b83c-4c4b-8aff-79848506a7cd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.273002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c761917-b83c-4c4b-8aff-79848506a7cd-kube-api-access-jccr8" (OuterVolumeSpecName: "kube-api-access-jccr8") pod "5c761917-b83c-4c4b-8aff-79848506a7cd" (UID: "5c761917-b83c-4c4b-8aff-79848506a7cd"). InnerVolumeSpecName "kube-api-access-jccr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.292384 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5c761917-b83c-4c4b-8aff-79848506a7cd" (UID: "5c761917-b83c-4c4b-8aff-79848506a7cd"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.293460 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-inventory" (OuterVolumeSpecName: "inventory") pod "5c761917-b83c-4c4b-8aff-79848506a7cd" (UID: "5c761917-b83c-4c4b-8aff-79848506a7cd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.301353 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5c761917-b83c-4c4b-8aff-79848506a7cd" (UID: "5c761917-b83c-4c4b-8aff-79848506a7cd"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.318014 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c761917-b83c-4c4b-8aff-79848506a7cd" (UID: "5c761917-b83c-4c4b-8aff-79848506a7cd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.364984 4770 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.365089 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.365130 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jccr8\" (UniqueName: \"kubernetes.io/projected/5c761917-b83c-4c4b-8aff-79848506a7cd-kube-api-access-jccr8\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.365166 4770 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.365203 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.365231 4770 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c761917-b83c-4c4b-8aff-79848506a7cd-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.693462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" event={"ID":"5c761917-b83c-4c4b-8aff-79848506a7cd","Type":"ContainerDied","Data":"402c6028149edd9bef3a86fa315ad9de91c2d3af2d4f2653843c7b9e38a60764"} Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.693850 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="402c6028149edd9bef3a86fa315ad9de91c2d3af2d4f2653843c7b9e38a60764" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.693550 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.823049 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp"] Jan 26 19:19:29 crc kubenswrapper[4770]: E0126 19:19:29.823684 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c761917-b83c-4c4b-8aff-79848506a7cd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.823735 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c761917-b83c-4c4b-8aff-79848506a7cd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.824060 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c761917-b83c-4c4b-8aff-79848506a7cd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.825333 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.832325 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.832343 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.834356 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.834609 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.838963 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.841117 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp"] Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.979504 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.979577 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: 
\"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.979600 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.980553 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:29 crc kubenswrapper[4770]: I0126 19:19:29.980770 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwln5\" (UniqueName: \"kubernetes.io/projected/372fe502-3240-4adc-b60d-ae93c8a37430-kube-api-access-fwln5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.082975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.083060 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fwln5\" (UniqueName: \"kubernetes.io/projected/372fe502-3240-4adc-b60d-ae93c8a37430-kube-api-access-fwln5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.083122 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.083147 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.083167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.089042 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" 
(UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.089297 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.090142 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.090164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.117218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwln5\" (UniqueName: \"kubernetes.io/projected/372fe502-3240-4adc-b60d-ae93c8a37430-kube-api-access-fwln5\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.151252 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.330529 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.330951 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.331013 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.331999 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.332091 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" gracePeriod=600 Jan 26 19:19:30 crc kubenswrapper[4770]: E0126 19:19:30.461035 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.704475 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" exitCode=0 Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.704527 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b"} Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.704567 4770 scope.go:117] "RemoveContainer" containerID="bcd6cbdcbb54366ae41277c5e0ca70660323878aa6ec238cecc096b0604b1641" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.705340 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:19:30 crc kubenswrapper[4770]: E0126 19:19:30.705687 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:19:30 crc kubenswrapper[4770]: I0126 19:19:30.741131 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp"] Jan 26 19:19:31 crc kubenswrapper[4770]: I0126 19:19:31.722312 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" event={"ID":"372fe502-3240-4adc-b60d-ae93c8a37430","Type":"ContainerStarted","Data":"973efd98cc19abc440b4d36af34001368097af0f74c1de8c4a32753c5f87b78f"} Jan 26 19:19:31 crc kubenswrapper[4770]: I0126 19:19:31.722866 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" event={"ID":"372fe502-3240-4adc-b60d-ae93c8a37430","Type":"ContainerStarted","Data":"be46c0b0f904c4ce21de3d542fa0903941d8d5d5427a1f3135cf18fcae35cfe8"} Jan 26 19:19:41 crc kubenswrapper[4770]: I0126 19:19:41.767185 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:19:41 crc kubenswrapper[4770]: E0126 19:19:41.767985 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:19:56 crc kubenswrapper[4770]: I0126 19:19:56.767045 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:19:56 crc kubenswrapper[4770]: E0126 19:19:56.767962 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:20:11 crc kubenswrapper[4770]: I0126 19:20:11.767161 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:20:11 crc kubenswrapper[4770]: E0126 19:20:11.768471 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:20:23 crc kubenswrapper[4770]: I0126 19:20:23.767667 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:20:23 crc kubenswrapper[4770]: E0126 19:20:23.768872 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:20:34 crc kubenswrapper[4770]: I0126 19:20:34.767834 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:20:34 crc kubenswrapper[4770]: E0126 19:20:34.768501 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:20:47 crc kubenswrapper[4770]: I0126 19:20:47.768412 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:20:47 crc kubenswrapper[4770]: E0126 19:20:47.769685 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:21:01 crc kubenswrapper[4770]: I0126 19:21:01.768054 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:21:01 crc kubenswrapper[4770]: E0126 19:21:01.769353 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:21:14 crc kubenswrapper[4770]: I0126 19:21:14.767809 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:21:14 crc kubenswrapper[4770]: E0126 19:21:14.768958 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:21:27 crc kubenswrapper[4770]: I0126 19:21:27.767833 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:21:27 crc kubenswrapper[4770]: E0126 19:21:27.768661 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:21:39 crc kubenswrapper[4770]: I0126 19:21:39.768408 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:21:39 crc kubenswrapper[4770]: E0126 19:21:39.769384 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:21:52 crc kubenswrapper[4770]: I0126 19:21:52.768527 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:21:52 crc kubenswrapper[4770]: E0126 19:21:52.769515 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:22:04 crc kubenswrapper[4770]: I0126 19:22:04.768225 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:22:04 crc kubenswrapper[4770]: E0126 19:22:04.769222 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:22:18 crc kubenswrapper[4770]: I0126 19:22:18.767639 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:22:18 crc kubenswrapper[4770]: E0126 19:22:18.768418 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:22:22 crc kubenswrapper[4770]: I0126 19:22:22.496414 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8688c56555-rsnrn" podUID="65d3af51-41f4-40e5-949e-a3eb611043bb" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 19:22:33 crc kubenswrapper[4770]: I0126 19:22:33.768092 4770 
scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:22:33 crc kubenswrapper[4770]: E0126 19:22:33.770019 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:22:45 crc kubenswrapper[4770]: I0126 19:22:45.778891 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:22:45 crc kubenswrapper[4770]: E0126 19:22:45.786879 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:23:00 crc kubenswrapper[4770]: I0126 19:23:00.766899 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:23:00 crc kubenswrapper[4770]: E0126 19:23:00.767964 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:23:12 crc kubenswrapper[4770]: I0126 
19:23:12.768601 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:23:12 crc kubenswrapper[4770]: E0126 19:23:12.769799 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:23:24 crc kubenswrapper[4770]: I0126 19:23:24.768489 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:23:24 crc kubenswrapper[4770]: E0126 19:23:24.769202 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:23:39 crc kubenswrapper[4770]: I0126 19:23:39.768160 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:23:39 crc kubenswrapper[4770]: E0126 19:23:39.769472 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:23:50 crc 
kubenswrapper[4770]: I0126 19:23:50.768404 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:23:50 crc kubenswrapper[4770]: E0126 19:23:50.769789 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:24:05 crc kubenswrapper[4770]: I0126 19:24:05.780298 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:24:05 crc kubenswrapper[4770]: E0126 19:24:05.783240 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:24:17 crc kubenswrapper[4770]: I0126 19:24:17.767248 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:24:17 crc kubenswrapper[4770]: E0126 19:24:17.768902 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 
26 19:24:20 crc kubenswrapper[4770]: I0126 19:24:20.155120 4770 scope.go:117] "RemoveContainer" containerID="3e12426e06a66d7ee358dc2d57e5328537379233bb9ee48e4cae0056751a4cfa" Jan 26 19:24:20 crc kubenswrapper[4770]: I0126 19:24:20.198568 4770 scope.go:117] "RemoveContainer" containerID="f308ee83fd262267b7d7d3cf7035c9ad6f6fd3366413a01becac03ccad7b1534" Jan 26 19:24:20 crc kubenswrapper[4770]: I0126 19:24:20.237737 4770 scope.go:117] "RemoveContainer" containerID="8ac400813a20d7ba2b210cc04fb085b4728f0b28423605a1e0c47bcc9a4e2418" Jan 26 19:24:31 crc kubenswrapper[4770]: I0126 19:24:31.768099 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:24:32 crc kubenswrapper[4770]: I0126 19:24:32.362433 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"51ba5f683ddfdfb2a03144d9fb048a07d8c7506062b10a748423fb653199a419"} Jan 26 19:24:32 crc kubenswrapper[4770]: I0126 19:24:32.390192 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" podStartSLOduration=302.817951727 podStartE2EDuration="5m3.390165758s" podCreationTimestamp="2026-01-26 19:19:29 +0000 UTC" firstStartedPulling="2026-01-26 19:19:30.796291594 +0000 UTC m=+2255.361198336" lastFinishedPulling="2026-01-26 19:19:31.368505595 +0000 UTC m=+2255.933412367" observedRunningTime="2026-01-26 19:19:31.749980585 +0000 UTC m=+2256.314887317" watchObservedRunningTime="2026-01-26 19:24:32.390165758 +0000 UTC m=+2556.955072490" Jan 26 19:24:41 crc kubenswrapper[4770]: I0126 19:24:41.468943 4770 generic.go:334] "Generic (PLEG): container finished" podID="372fe502-3240-4adc-b60d-ae93c8a37430" containerID="973efd98cc19abc440b4d36af34001368097af0f74c1de8c4a32753c5f87b78f" exitCode=0 Jan 26 19:24:41 crc kubenswrapper[4770]: 
I0126 19:24:41.469013 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" event={"ID":"372fe502-3240-4adc-b60d-ae93c8a37430","Type":"ContainerDied","Data":"973efd98cc19abc440b4d36af34001368097af0f74c1de8c4a32753c5f87b78f"} Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.024844 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.134239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-combined-ca-bundle\") pod \"372fe502-3240-4adc-b60d-ae93c8a37430\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.134340 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-inventory\") pod \"372fe502-3240-4adc-b60d-ae93c8a37430\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.134495 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwln5\" (UniqueName: \"kubernetes.io/projected/372fe502-3240-4adc-b60d-ae93c8a37430-kube-api-access-fwln5\") pod \"372fe502-3240-4adc-b60d-ae93c8a37430\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.134557 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-secret-0\") pod \"372fe502-3240-4adc-b60d-ae93c8a37430\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 
19:24:43.134905 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-ssh-key-openstack-edpm-ipam\") pod \"372fe502-3240-4adc-b60d-ae93c8a37430\" (UID: \"372fe502-3240-4adc-b60d-ae93c8a37430\") " Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.142944 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "372fe502-3240-4adc-b60d-ae93c8a37430" (UID: "372fe502-3240-4adc-b60d-ae93c8a37430"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.142983 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372fe502-3240-4adc-b60d-ae93c8a37430-kube-api-access-fwln5" (OuterVolumeSpecName: "kube-api-access-fwln5") pod "372fe502-3240-4adc-b60d-ae93c8a37430" (UID: "372fe502-3240-4adc-b60d-ae93c8a37430"). InnerVolumeSpecName "kube-api-access-fwln5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.170205 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "372fe502-3240-4adc-b60d-ae93c8a37430" (UID: "372fe502-3240-4adc-b60d-ae93c8a37430"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.173742 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-inventory" (OuterVolumeSpecName: "inventory") pod "372fe502-3240-4adc-b60d-ae93c8a37430" (UID: "372fe502-3240-4adc-b60d-ae93c8a37430"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.196801 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "372fe502-3240-4adc-b60d-ae93c8a37430" (UID: "372fe502-3240-4adc-b60d-ae93c8a37430"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.239562 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwln5\" (UniqueName: \"kubernetes.io/projected/372fe502-3240-4adc-b60d-ae93c8a37430-kube-api-access-fwln5\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.239632 4770 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.239650 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.239667 4770 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.239685 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/372fe502-3240-4adc-b60d-ae93c8a37430-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.497548 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" event={"ID":"372fe502-3240-4adc-b60d-ae93c8a37430","Type":"ContainerDied","Data":"be46c0b0f904c4ce21de3d542fa0903941d8d5d5427a1f3135cf18fcae35cfe8"} Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.497608 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be46c0b0f904c4ce21de3d542fa0903941d8d5d5427a1f3135cf18fcae35cfe8" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.497945 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.633070 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt"] Jan 26 19:24:43 crc kubenswrapper[4770]: E0126 19:24:43.633514 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372fe502-3240-4adc-b60d-ae93c8a37430" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.633550 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="372fe502-3240-4adc-b60d-ae93c8a37430" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.633829 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="372fe502-3240-4adc-b60d-ae93c8a37430" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.634616 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.645984 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.646110 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.646146 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.648480 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.648510 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.648847 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.648956 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.650219 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt"] Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.749558 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: 
I0126 19:24:43.749659 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.749830 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.749914 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn87h\" (UniqueName: \"kubernetes.io/projected/c54172aa-4886-49b2-8834-ea8e8c57306e-kube-api-access-xn87h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.750214 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.750420 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.750492 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.750556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.750631 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.853723 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.853871 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.853965 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn87h\" (UniqueName: \"kubernetes.io/projected/c54172aa-4886-49b2-8834-ea8e8c57306e-kube-api-access-xn87h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.854131 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.854503 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.854574 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.854641 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.854783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.854904 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.856663 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.858391 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.858996 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.859108 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.859366 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.860914 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.861271 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.861654 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.874919 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn87h\" (UniqueName: \"kubernetes.io/projected/c54172aa-4886-49b2-8834-ea8e8c57306e-kube-api-access-xn87h\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9qgt\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:43 crc kubenswrapper[4770]: I0126 19:24:43.971426 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:24:44 crc kubenswrapper[4770]: I0126 19:24:44.370718 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt"] Jan 26 19:24:44 crc kubenswrapper[4770]: I0126 19:24:44.384028 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:24:44 crc kubenswrapper[4770]: I0126 19:24:44.507862 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" event={"ID":"c54172aa-4886-49b2-8834-ea8e8c57306e","Type":"ContainerStarted","Data":"7f036e3edea5e5cf5011535867ae7aacc1d59485f4d62d62f8407ee76c7611bd"} Jan 26 19:24:45 crc kubenswrapper[4770]: I0126 19:24:45.529773 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" event={"ID":"c54172aa-4886-49b2-8834-ea8e8c57306e","Type":"ContainerStarted","Data":"c876e2a8dd5af905fdb85cb169361339c6e4f445fb9435ebdadc2bb6a4ec81ec"} Jan 26 19:24:45 crc kubenswrapper[4770]: I0126 19:24:45.564567 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" podStartSLOduration=2.142851402 podStartE2EDuration="2.564544096s" podCreationTimestamp="2026-01-26 19:24:43 +0000 UTC" firstStartedPulling="2026-01-26 19:24:44.383679166 +0000 UTC m=+2568.948585908" lastFinishedPulling="2026-01-26 19:24:44.80537185 +0000 UTC m=+2569.370278602" observedRunningTime="2026-01-26 19:24:45.559368244 +0000 UTC m=+2570.124275016" watchObservedRunningTime="2026-01-26 19:24:45.564544096 +0000 UTC m=+2570.129450868" Jan 26 19:25:13 crc kubenswrapper[4770]: I0126 19:25:13.607139 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5c5fff9c7-vsc8j" podUID="061a1ade-3e2c-4fa3-af1d-79119e42b777" containerName="neutron-httpd" probeResult="failure" 
output="HTTP probe failed with statuscode: 502" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.376571 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-thv2f"] Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.378866 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.390366 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thv2f"] Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.471271 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-catalog-content\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.471545 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2wz\" (UniqueName: \"kubernetes.io/projected/e831648b-6e82-4059-ac8a-7fea8a98912f-kube-api-access-pl2wz\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.471730 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-utilities\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.573531 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-utilities\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.573628 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-catalog-content\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.573650 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2wz\" (UniqueName: \"kubernetes.io/projected/e831648b-6e82-4059-ac8a-7fea8a98912f-kube-api-access-pl2wz\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.574392 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-utilities\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.574614 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-catalog-content\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.594833 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2wz\" (UniqueName: 
\"kubernetes.io/projected/e831648b-6e82-4059-ac8a-7fea8a98912f-kube-api-access-pl2wz\") pod \"redhat-operators-thv2f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:26:59 crc kubenswrapper[4770]: I0126 19:26:59.737755 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:27:00 crc kubenswrapper[4770]: I0126 19:27:00.237000 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thv2f"] Jan 26 19:27:00 crc kubenswrapper[4770]: I0126 19:27:00.331189 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:27:00 crc kubenswrapper[4770]: I0126 19:27:00.331259 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:27:01 crc kubenswrapper[4770]: I0126 19:27:01.014693 4770 generic.go:334] "Generic (PLEG): container finished" podID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerID="b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26" exitCode=0 Jan 26 19:27:01 crc kubenswrapper[4770]: I0126 19:27:01.015041 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv2f" event={"ID":"e831648b-6e82-4059-ac8a-7fea8a98912f","Type":"ContainerDied","Data":"b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26"} Jan 26 19:27:01 crc kubenswrapper[4770]: I0126 19:27:01.015073 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv2f" event={"ID":"e831648b-6e82-4059-ac8a-7fea8a98912f","Type":"ContainerStarted","Data":"5bbd2267774a4ea2fed19cb136eb4e3882091da50e64824985e02f9ef4e5ea04"} Jan 26 19:27:03 crc kubenswrapper[4770]: I0126 19:27:03.033650 4770 generic.go:334] "Generic (PLEG): container finished" podID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerID="6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89" exitCode=0 Jan 26 19:27:03 crc kubenswrapper[4770]: I0126 19:27:03.033735 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv2f" event={"ID":"e831648b-6e82-4059-ac8a-7fea8a98912f","Type":"ContainerDied","Data":"6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89"} Jan 26 19:27:05 crc kubenswrapper[4770]: I0126 19:27:05.056790 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv2f" event={"ID":"e831648b-6e82-4059-ac8a-7fea8a98912f","Type":"ContainerStarted","Data":"956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a"} Jan 26 19:27:05 crc kubenswrapper[4770]: I0126 19:27:05.078767 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-thv2f" podStartSLOduration=2.622714916 podStartE2EDuration="6.078749778s" podCreationTimestamp="2026-01-26 19:26:59 +0000 UTC" firstStartedPulling="2026-01-26 19:27:01.018084787 +0000 UTC m=+2705.582991529" lastFinishedPulling="2026-01-26 19:27:04.474119659 +0000 UTC m=+2709.039026391" observedRunningTime="2026-01-26 19:27:05.072366273 +0000 UTC m=+2709.637273025" watchObservedRunningTime="2026-01-26 19:27:05.078749778 +0000 UTC m=+2709.643656510" Jan 26 19:27:09 crc kubenswrapper[4770]: I0126 19:27:09.738323 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:27:09 crc kubenswrapper[4770]: I0126 
19:27:09.739117 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:27:10 crc kubenswrapper[4770]: I0126 19:27:10.791931 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-thv2f" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="registry-server" probeResult="failure" output=< Jan 26 19:27:10 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 19:27:10 crc kubenswrapper[4770]: > Jan 26 19:27:19 crc kubenswrapper[4770]: I0126 19:27:19.804254 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:27:19 crc kubenswrapper[4770]: I0126 19:27:19.858484 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:27:20 crc kubenswrapper[4770]: I0126 19:27:20.063934 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thv2f"] Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.213455 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-thv2f" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="registry-server" containerID="cri-o://956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a" gracePeriod=2 Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.675349 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.766993 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-catalog-content\") pod \"e831648b-6e82-4059-ac8a-7fea8a98912f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.767632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-utilities\") pod \"e831648b-6e82-4059-ac8a-7fea8a98912f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.768160 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl2wz\" (UniqueName: \"kubernetes.io/projected/e831648b-6e82-4059-ac8a-7fea8a98912f-kube-api-access-pl2wz\") pod \"e831648b-6e82-4059-ac8a-7fea8a98912f\" (UID: \"e831648b-6e82-4059-ac8a-7fea8a98912f\") " Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.769007 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-utilities" (OuterVolumeSpecName: "utilities") pod "e831648b-6e82-4059-ac8a-7fea8a98912f" (UID: "e831648b-6e82-4059-ac8a-7fea8a98912f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.770992 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.779269 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e831648b-6e82-4059-ac8a-7fea8a98912f-kube-api-access-pl2wz" (OuterVolumeSpecName: "kube-api-access-pl2wz") pod "e831648b-6e82-4059-ac8a-7fea8a98912f" (UID: "e831648b-6e82-4059-ac8a-7fea8a98912f"). InnerVolumeSpecName "kube-api-access-pl2wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.873200 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e831648b-6e82-4059-ac8a-7fea8a98912f" (UID: "e831648b-6e82-4059-ac8a-7fea8a98912f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.874403 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl2wz\" (UniqueName: \"kubernetes.io/projected/e831648b-6e82-4059-ac8a-7fea8a98912f-kube-api-access-pl2wz\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:21 crc kubenswrapper[4770]: I0126 19:27:21.874638 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e831648b-6e82-4059-ac8a-7fea8a98912f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.224189 4770 generic.go:334] "Generic (PLEG): container finished" podID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerID="956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a" exitCode=0 Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.224269 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thv2f" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.224785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv2f" event={"ID":"e831648b-6e82-4059-ac8a-7fea8a98912f","Type":"ContainerDied","Data":"956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a"} Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.224843 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thv2f" event={"ID":"e831648b-6e82-4059-ac8a-7fea8a98912f","Type":"ContainerDied","Data":"5bbd2267774a4ea2fed19cb136eb4e3882091da50e64824985e02f9ef4e5ea04"} Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.224871 4770 scope.go:117] "RemoveContainer" containerID="956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.256334 4770 scope.go:117] "RemoveContainer" 
containerID="6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.271908 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thv2f"] Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.282715 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-thv2f"] Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.288665 4770 scope.go:117] "RemoveContainer" containerID="b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.344200 4770 scope.go:117] "RemoveContainer" containerID="956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a" Jan 26 19:27:22 crc kubenswrapper[4770]: E0126 19:27:22.344720 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a\": container with ID starting with 956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a not found: ID does not exist" containerID="956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.344748 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a"} err="failed to get container status \"956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a\": rpc error: code = NotFound desc = could not find container \"956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a\": container with ID starting with 956c5d3801356e52c719348dc69bb9a0a593858070b7163be3a7ab7edbf1097a not found: ID does not exist" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.344767 4770 scope.go:117] "RemoveContainer" 
containerID="6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89" Jan 26 19:27:22 crc kubenswrapper[4770]: E0126 19:27:22.345023 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89\": container with ID starting with 6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89 not found: ID does not exist" containerID="6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.345043 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89"} err="failed to get container status \"6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89\": rpc error: code = NotFound desc = could not find container \"6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89\": container with ID starting with 6e7ca95518cd4ca3c03bb3af626542820002774c2e23840e985a7d41c296af89 not found: ID does not exist" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.345056 4770 scope.go:117] "RemoveContainer" containerID="b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26" Jan 26 19:27:22 crc kubenswrapper[4770]: E0126 19:27:22.345370 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26\": container with ID starting with b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26 not found: ID does not exist" containerID="b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26" Jan 26 19:27:22 crc kubenswrapper[4770]: I0126 19:27:22.345389 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26"} err="failed to get container status \"b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26\": rpc error: code = NotFound desc = could not find container \"b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26\": container with ID starting with b6e1bb88b7903549818b65a45715c1c667b6218fbc69a6811db6de6293fc7d26 not found: ID does not exist" Jan 26 19:27:23 crc kubenswrapper[4770]: I0126 19:27:23.780828 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" path="/var/lib/kubelet/pods/e831648b-6e82-4059-ac8a-7fea8a98912f/volumes" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.662059 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8l9tt"] Jan 26 19:27:24 crc kubenswrapper[4770]: E0126 19:27:24.662878 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="extract-utilities" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.662993 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="extract-utilities" Jan 26 19:27:24 crc kubenswrapper[4770]: E0126 19:27:24.663073 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="registry-server" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.663156 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="registry-server" Jan 26 19:27:24 crc kubenswrapper[4770]: E0126 19:27:24.663250 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="extract-content" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.663320 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="extract-content" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.663647 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e831648b-6e82-4059-ac8a-7fea8a98912f" containerName="registry-server" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.665537 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.672979 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l9tt"] Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.737109 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-utilities\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.737196 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znpcv\" (UniqueName: \"kubernetes.io/projected/29a8549d-1924-4553-bb59-98b3537b7c0f-kube-api-access-znpcv\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.737271 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-catalog-content\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.839687 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-catalog-content\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.840170 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-utilities\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.840221 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-catalog-content\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.840254 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znpcv\" (UniqueName: \"kubernetes.io/projected/29a8549d-1924-4553-bb59-98b3537b7c0f-kube-api-access-znpcv\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.840666 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-utilities\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.866159 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-znpcv\" (UniqueName: \"kubernetes.io/projected/29a8549d-1924-4553-bb59-98b3537b7c0f-kube-api-access-znpcv\") pod \"redhat-marketplace-8l9tt\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:24 crc kubenswrapper[4770]: I0126 19:27:24.982125 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:25 crc kubenswrapper[4770]: I0126 19:27:25.499724 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l9tt"] Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.266048 4770 generic.go:334] "Generic (PLEG): container finished" podID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerID="d67b8700250bea6a7c24badf9359edf4ab98101dbb1f2666f0cd00d6284154b2" exitCode=0 Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.266108 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l9tt" event={"ID":"29a8549d-1924-4553-bb59-98b3537b7c0f","Type":"ContainerDied","Data":"d67b8700250bea6a7c24badf9359edf4ab98101dbb1f2666f0cd00d6284154b2"} Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.266451 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l9tt" event={"ID":"29a8549d-1924-4553-bb59-98b3537b7c0f","Type":"ContainerStarted","Data":"4a848d6a32c0260dc783a7f0896ed57c53c9757af3fc9f25ee404c88aabe56ba"} Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.464125 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8cvpm"] Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.468073 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.475848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-utilities\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.475919 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkn62\" (UniqueName: \"kubernetes.io/projected/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-kube-api-access-hkn62\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.476073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-catalog-content\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.479420 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8cvpm"] Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.578094 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-catalog-content\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.578289 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-utilities\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.578345 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkn62\" (UniqueName: \"kubernetes.io/projected/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-kube-api-access-hkn62\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.578778 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-catalog-content\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.579090 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-utilities\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.604751 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkn62\" (UniqueName: \"kubernetes.io/projected/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-kube-api-access-hkn62\") pod \"certified-operators-8cvpm\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:26 crc kubenswrapper[4770]: I0126 19:27:26.792147 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:27 crc kubenswrapper[4770]: I0126 19:27:27.295956 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l9tt" event={"ID":"29a8549d-1924-4553-bb59-98b3537b7c0f","Type":"ContainerStarted","Data":"2ead4db2208a8fbb8152f4beffb6880c6676d577144ebd4bb414dc39a7ddc79a"} Jan 26 19:27:27 crc kubenswrapper[4770]: I0126 19:27:27.375290 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8cvpm"] Jan 26 19:27:27 crc kubenswrapper[4770]: W0126 19:27:27.379867 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod421cecfb_eed2_46ab_8c05_82ef2d5fc5f6.slice/crio-453b63576334da23c06ffadcc071302f2cf4c972491d2eba25513a2ecfd41097 WatchSource:0}: Error finding container 453b63576334da23c06ffadcc071302f2cf4c972491d2eba25513a2ecfd41097: Status 404 returned error can't find the container with id 453b63576334da23c06ffadcc071302f2cf4c972491d2eba25513a2ecfd41097 Jan 26 19:27:28 crc kubenswrapper[4770]: I0126 19:27:28.310228 4770 generic.go:334] "Generic (PLEG): container finished" podID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerID="5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082" exitCode=0 Jan 26 19:27:28 crc kubenswrapper[4770]: I0126 19:27:28.310298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cvpm" event={"ID":"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6","Type":"ContainerDied","Data":"5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082"} Jan 26 19:27:28 crc kubenswrapper[4770]: I0126 19:27:28.310676 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cvpm" 
event={"ID":"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6","Type":"ContainerStarted","Data":"453b63576334da23c06ffadcc071302f2cf4c972491d2eba25513a2ecfd41097"} Jan 26 19:27:28 crc kubenswrapper[4770]: I0126 19:27:28.314500 4770 generic.go:334] "Generic (PLEG): container finished" podID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerID="2ead4db2208a8fbb8152f4beffb6880c6676d577144ebd4bb414dc39a7ddc79a" exitCode=0 Jan 26 19:27:28 crc kubenswrapper[4770]: I0126 19:27:28.314538 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l9tt" event={"ID":"29a8549d-1924-4553-bb59-98b3537b7c0f","Type":"ContainerDied","Data":"2ead4db2208a8fbb8152f4beffb6880c6676d577144ebd4bb414dc39a7ddc79a"} Jan 26 19:27:29 crc kubenswrapper[4770]: I0126 19:27:29.325998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cvpm" event={"ID":"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6","Type":"ContainerStarted","Data":"d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5"} Jan 26 19:27:29 crc kubenswrapper[4770]: I0126 19:27:29.329392 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l9tt" event={"ID":"29a8549d-1924-4553-bb59-98b3537b7c0f","Type":"ContainerStarted","Data":"110bd656cbcf7915a6a0beff54b0cdf5c1c1ad505531ed66b991a76437ba84a6"} Jan 26 19:27:29 crc kubenswrapper[4770]: I0126 19:27:29.382110 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8l9tt" podStartSLOduration=2.922251271 podStartE2EDuration="5.382082904s" podCreationTimestamp="2026-01-26 19:27:24 +0000 UTC" firstStartedPulling="2026-01-26 19:27:26.268240192 +0000 UTC m=+2730.833146924" lastFinishedPulling="2026-01-26 19:27:28.728071825 +0000 UTC m=+2733.292978557" observedRunningTime="2026-01-26 19:27:29.371857475 +0000 UTC m=+2733.936764217" watchObservedRunningTime="2026-01-26 19:27:29.382082904 +0000 UTC 
m=+2733.946989676" Jan 26 19:27:30 crc kubenswrapper[4770]: I0126 19:27:30.330793 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:27:30 crc kubenswrapper[4770]: I0126 19:27:30.331186 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:27:30 crc kubenswrapper[4770]: I0126 19:27:30.358067 4770 generic.go:334] "Generic (PLEG): container finished" podID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerID="d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5" exitCode=0 Jan 26 19:27:30 crc kubenswrapper[4770]: I0126 19:27:30.366855 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cvpm" event={"ID":"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6","Type":"ContainerDied","Data":"d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5"} Jan 26 19:27:31 crc kubenswrapper[4770]: I0126 19:27:31.369660 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cvpm" event={"ID":"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6","Type":"ContainerStarted","Data":"3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705"} Jan 26 19:27:31 crc kubenswrapper[4770]: I0126 19:27:31.397680 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8cvpm" podStartSLOduration=2.837983323 podStartE2EDuration="5.397663917s" podCreationTimestamp="2026-01-26 19:27:26 +0000 UTC" 
firstStartedPulling="2026-01-26 19:27:28.312872149 +0000 UTC m=+2732.877778891" lastFinishedPulling="2026-01-26 19:27:30.872552753 +0000 UTC m=+2735.437459485" observedRunningTime="2026-01-26 19:27:31.391993322 +0000 UTC m=+2735.956900064" watchObservedRunningTime="2026-01-26 19:27:31.397663917 +0000 UTC m=+2735.962570649" Jan 26 19:27:33 crc kubenswrapper[4770]: I0126 19:27:33.391106 4770 generic.go:334] "Generic (PLEG): container finished" podID="c54172aa-4886-49b2-8834-ea8e8c57306e" containerID="c876e2a8dd5af905fdb85cb169361339c6e4f445fb9435ebdadc2bb6a4ec81ec" exitCode=0 Jan 26 19:27:33 crc kubenswrapper[4770]: I0126 19:27:33.391178 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" event={"ID":"c54172aa-4886-49b2-8834-ea8e8c57306e","Type":"ContainerDied","Data":"c876e2a8dd5af905fdb85cb169361339c6e4f445fb9435ebdadc2bb6a4ec81ec"} Jan 26 19:27:34 crc kubenswrapper[4770]: I0126 19:27:34.872897 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:27:34 crc kubenswrapper[4770]: I0126 19:27:34.983203 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:34 crc kubenswrapper[4770]: I0126 19:27:34.983270 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.040843 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.075019 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-1\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.075079 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-extra-config-0\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.075136 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-combined-ca-bundle\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.075182 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-0\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.075204 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-1\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.075285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-inventory\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.075303 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-0\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.076133 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-ssh-key-openstack-edpm-ipam\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: \"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.076226 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn87h\" (UniqueName: \"kubernetes.io/projected/c54172aa-4886-49b2-8834-ea8e8c57306e-kube-api-access-xn87h\") pod \"c54172aa-4886-49b2-8834-ea8e8c57306e\" (UID: 
\"c54172aa-4886-49b2-8834-ea8e8c57306e\") " Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.081070 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c54172aa-4886-49b2-8834-ea8e8c57306e-kube-api-access-xn87h" (OuterVolumeSpecName: "kube-api-access-xn87h") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "kube-api-access-xn87h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.084371 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.105489 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.115988 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.117471 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.118551 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.130352 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.132109 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-inventory" (OuterVolumeSpecName: "inventory") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.133502 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c54172aa-4886-49b2-8834-ea8e8c57306e" (UID: "c54172aa-4886-49b2-8834-ea8e8c57306e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.178971 4770 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.179019 4770 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.179033 4770 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.179046 4770 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.179059 4770 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 
19:27:35.179072 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.179083 4770 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.179095 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c54172aa-4886-49b2-8834-ea8e8c57306e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.179107 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn87h\" (UniqueName: \"kubernetes.io/projected/c54172aa-4886-49b2-8834-ea8e8c57306e-kube-api-access-xn87h\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.413981 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" event={"ID":"c54172aa-4886-49b2-8834-ea8e8c57306e","Type":"ContainerDied","Data":"7f036e3edea5e5cf5011535867ae7aacc1d59485f4d62d62f8407ee76c7611bd"} Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.414009 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9qgt" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.414038 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f036e3edea5e5cf5011535867ae7aacc1d59485f4d62d62f8407ee76c7611bd" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.481180 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.636827 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8"] Jan 26 19:27:35 crc kubenswrapper[4770]: E0126 19:27:35.638031 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c54172aa-4886-49b2-8834-ea8e8c57306e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.638080 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c54172aa-4886-49b2-8834-ea8e8c57306e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.638651 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c54172aa-4886-49b2-8834-ea8e8c57306e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.640307 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.643169 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.643456 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.646994 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.647889 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6725d" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.648161 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.665391 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8"] Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.689840 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.689981 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.690006 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.690080 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.690139 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvwc\" (UniqueName: \"kubernetes.io/projected/50064c0b-e5a3-46a3-9053-536fcbe380a3-kube-api-access-2nvwc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.690180 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.690205 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.793613 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.793652 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.793761 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.793829 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2nvwc\" (UniqueName: \"kubernetes.io/projected/50064c0b-e5a3-46a3-9053-536fcbe380a3-kube-api-access-2nvwc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.793876 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.793900 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.793939 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.797820 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.797948 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.798457 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.800560 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.814621 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: 
\"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.815746 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.817814 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvwc\" (UniqueName: \"kubernetes.io/projected/50064c0b-e5a3-46a3-9053-536fcbe380a3-kube-api-access-2nvwc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.854351 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l9tt"] Jan 26 19:27:35 crc kubenswrapper[4770]: I0126 19:27:35.967121 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:27:36 crc kubenswrapper[4770]: I0126 19:27:36.551497 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8"] Jan 26 19:27:36 crc kubenswrapper[4770]: I0126 19:27:36.792854 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:36 crc kubenswrapper[4770]: I0126 19:27:36.792928 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:36 crc kubenswrapper[4770]: I0126 19:27:36.881094 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:37 crc kubenswrapper[4770]: I0126 19:27:37.449375 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8l9tt" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="registry-server" containerID="cri-o://110bd656cbcf7915a6a0beff54b0cdf5c1c1ad505531ed66b991a76437ba84a6" gracePeriod=2 Jan 26 19:27:37 crc kubenswrapper[4770]: I0126 19:27:37.450899 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" event={"ID":"50064c0b-e5a3-46a3-9053-536fcbe380a3","Type":"ContainerStarted","Data":"1b46b353589c58084cf89badb16466c141ced909ba68b6b68710ec00ee4ee81d"} Jan 26 19:27:37 crc kubenswrapper[4770]: I0126 19:27:37.450933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" event={"ID":"50064c0b-e5a3-46a3-9053-536fcbe380a3","Type":"ContainerStarted","Data":"2abc969fd73b73bb13202ae07920d01f01e2edd601024e8fabf6f29bff76e18f"} Jan 26 19:27:37 crc kubenswrapper[4770]: I0126 19:27:37.499154 4770 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" podStartSLOduration=2.038982343 podStartE2EDuration="2.4991388s" podCreationTimestamp="2026-01-26 19:27:35 +0000 UTC" firstStartedPulling="2026-01-26 19:27:36.553427641 +0000 UTC m=+2741.118334373" lastFinishedPulling="2026-01-26 19:27:37.013584078 +0000 UTC m=+2741.578490830" observedRunningTime="2026-01-26 19:27:37.493049653 +0000 UTC m=+2742.057956385" watchObservedRunningTime="2026-01-26 19:27:37.4991388 +0000 UTC m=+2742.064045522" Jan 26 19:27:37 crc kubenswrapper[4770]: I0126 19:27:37.545973 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.470015 4770 generic.go:334] "Generic (PLEG): container finished" podID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerID="110bd656cbcf7915a6a0beff54b0cdf5c1c1ad505531ed66b991a76437ba84a6" exitCode=0 Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.470498 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l9tt" event={"ID":"29a8549d-1924-4553-bb59-98b3537b7c0f","Type":"ContainerDied","Data":"110bd656cbcf7915a6a0beff54b0cdf5c1c1ad505531ed66b991a76437ba84a6"} Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.553214 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.665945 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-utilities\") pod \"29a8549d-1924-4553-bb59-98b3537b7c0f\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.665985 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znpcv\" (UniqueName: \"kubernetes.io/projected/29a8549d-1924-4553-bb59-98b3537b7c0f-kube-api-access-znpcv\") pod \"29a8549d-1924-4553-bb59-98b3537b7c0f\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.666013 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-catalog-content\") pod \"29a8549d-1924-4553-bb59-98b3537b7c0f\" (UID: \"29a8549d-1924-4553-bb59-98b3537b7c0f\") " Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.668035 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-utilities" (OuterVolumeSpecName: "utilities") pod "29a8549d-1924-4553-bb59-98b3537b7c0f" (UID: "29a8549d-1924-4553-bb59-98b3537b7c0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.672422 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a8549d-1924-4553-bb59-98b3537b7c0f-kube-api-access-znpcv" (OuterVolumeSpecName: "kube-api-access-znpcv") pod "29a8549d-1924-4553-bb59-98b3537b7c0f" (UID: "29a8549d-1924-4553-bb59-98b3537b7c0f"). InnerVolumeSpecName "kube-api-access-znpcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.688919 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a8549d-1924-4553-bb59-98b3537b7c0f" (UID: "29a8549d-1924-4553-bb59-98b3537b7c0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.768109 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.768261 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znpcv\" (UniqueName: \"kubernetes.io/projected/29a8549d-1924-4553-bb59-98b3537b7c0f-kube-api-access-znpcv\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:38 crc kubenswrapper[4770]: I0126 19:27:38.768317 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a8549d-1924-4553-bb59-98b3537b7c0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.256252 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8cvpm"] Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.485237 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8l9tt" event={"ID":"29a8549d-1924-4553-bb59-98b3537b7c0f","Type":"ContainerDied","Data":"4a848d6a32c0260dc783a7f0896ed57c53c9757af3fc9f25ee404c88aabe56ba"} Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.485302 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8l9tt" Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.485323 4770 scope.go:117] "RemoveContainer" containerID="110bd656cbcf7915a6a0beff54b0cdf5c1c1ad505531ed66b991a76437ba84a6" Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.485372 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8cvpm" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="registry-server" containerID="cri-o://3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705" gracePeriod=2 Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.516057 4770 scope.go:117] "RemoveContainer" containerID="2ead4db2208a8fbb8152f4beffb6880c6676d577144ebd4bb414dc39a7ddc79a" Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.542315 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l9tt"] Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.556036 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8l9tt"] Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.602280 4770 scope.go:117] "RemoveContainer" containerID="d67b8700250bea6a7c24badf9359edf4ab98101dbb1f2666f0cd00d6284154b2" Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.783688 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" path="/var/lib/kubelet/pods/29a8549d-1924-4553-bb59-98b3537b7c0f/volumes" Jan 26 19:27:39 crc kubenswrapper[4770]: I0126 19:27:39.994609 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.126361 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-catalog-content\") pod \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.126461 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-utilities\") pod \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.126548 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkn62\" (UniqueName: \"kubernetes.io/projected/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-kube-api-access-hkn62\") pod \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\" (UID: \"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6\") " Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.127477 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-utilities" (OuterVolumeSpecName: "utilities") pod "421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" (UID: "421cecfb-eed2-46ab-8c05-82ef2d5fc5f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.134801 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-kube-api-access-hkn62" (OuterVolumeSpecName: "kube-api-access-hkn62") pod "421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" (UID: "421cecfb-eed2-46ab-8c05-82ef2d5fc5f6"). InnerVolumeSpecName "kube-api-access-hkn62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.175938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" (UID: "421cecfb-eed2-46ab-8c05-82ef2d5fc5f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.229232 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.229283 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.229298 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkn62\" (UniqueName: \"kubernetes.io/projected/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6-kube-api-access-hkn62\") on node \"crc\" DevicePath \"\"" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.498416 4770 generic.go:334] "Generic (PLEG): container finished" podID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerID="3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705" exitCode=0 Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.498461 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cvpm" event={"ID":"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6","Type":"ContainerDied","Data":"3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705"} Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.498487 4770 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-8cvpm" event={"ID":"421cecfb-eed2-46ab-8c05-82ef2d5fc5f6","Type":"ContainerDied","Data":"453b63576334da23c06ffadcc071302f2cf4c972491d2eba25513a2ecfd41097"} Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.498513 4770 scope.go:117] "RemoveContainer" containerID="3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.498534 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8cvpm" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.538690 4770 scope.go:117] "RemoveContainer" containerID="d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.550877 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8cvpm"] Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.561148 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8cvpm"] Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.573907 4770 scope.go:117] "RemoveContainer" containerID="5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.595289 4770 scope.go:117] "RemoveContainer" containerID="3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705" Jan 26 19:27:40 crc kubenswrapper[4770]: E0126 19:27:40.596369 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705\": container with ID starting with 3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705 not found: ID does not exist" containerID="3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 
19:27:40.596422 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705"} err="failed to get container status \"3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705\": rpc error: code = NotFound desc = could not find container \"3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705\": container with ID starting with 3975d28a62f525c2485a6ae5cfe852c7106224d006be6a3f35693dddd10e2705 not found: ID does not exist" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.596460 4770 scope.go:117] "RemoveContainer" containerID="d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5" Jan 26 19:27:40 crc kubenswrapper[4770]: E0126 19:27:40.597014 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5\": container with ID starting with d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5 not found: ID does not exist" containerID="d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.597052 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5"} err="failed to get container status \"d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5\": rpc error: code = NotFound desc = could not find container \"d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5\": container with ID starting with d3af24edb3fc85d747b303d832917441715a2693670c486f13178fb7ad6df2a5 not found: ID does not exist" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.597078 4770 scope.go:117] "RemoveContainer" containerID="5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082" Jan 26 19:27:40 crc 
kubenswrapper[4770]: E0126 19:27:40.597432 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082\": container with ID starting with 5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082 not found: ID does not exist" containerID="5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082" Jan 26 19:27:40 crc kubenswrapper[4770]: I0126 19:27:40.597479 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082"} err="failed to get container status \"5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082\": rpc error: code = NotFound desc = could not find container \"5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082\": container with ID starting with 5909e7f78d185a5e7ae4741dcd2c2bd0d8cd657d1d89e03a6e3800f0487fd082 not found: ID does not exist" Jan 26 19:27:41 crc kubenswrapper[4770]: I0126 19:27:41.789762 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" path="/var/lib/kubelet/pods/421cecfb-eed2-46ab-8c05-82ef2d5fc5f6/volumes" Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.330790 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.331323 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.331373 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.331967 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51ba5f683ddfdfb2a03144d9fb048a07d8c7506062b10a748423fb653199a419"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.332033 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://51ba5f683ddfdfb2a03144d9fb048a07d8c7506062b10a748423fb653199a419" gracePeriod=600 Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.725111 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="51ba5f683ddfdfb2a03144d9fb048a07d8c7506062b10a748423fb653199a419" exitCode=0 Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.725302 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"51ba5f683ddfdfb2a03144d9fb048a07d8c7506062b10a748423fb653199a419"} Jan 26 19:28:00 crc kubenswrapper[4770]: I0126 19:28:00.725454 4770 scope.go:117] "RemoveContainer" containerID="8937450f037148e73f73d59d03d0eb26130940d975fc9e3afdbe5bc142f3ee7b" Jan 26 19:28:01 crc kubenswrapper[4770]: I0126 19:28:01.743440 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256"} Jan 26 19:29:58 crc kubenswrapper[4770]: I0126 19:29:58.085303 4770 generic.go:334] "Generic (PLEG): container finished" podID="50064c0b-e5a3-46a3-9053-536fcbe380a3" containerID="1b46b353589c58084cf89badb16466c141ced909ba68b6b68710ec00ee4ee81d" exitCode=0 Jan 26 19:29:58 crc kubenswrapper[4770]: I0126 19:29:58.085397 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" event={"ID":"50064c0b-e5a3-46a3-9053-536fcbe380a3","Type":"ContainerDied","Data":"1b46b353589c58084cf89badb16466c141ced909ba68b6b68710ec00ee4ee81d"} Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.607814 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.764714 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-inventory\") pod \"50064c0b-e5a3-46a3-9053-536fcbe380a3\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.765233 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-0\") pod \"50064c0b-e5a3-46a3-9053-536fcbe380a3\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.765742 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-telemetry-combined-ca-bundle\") pod \"50064c0b-e5a3-46a3-9053-536fcbe380a3\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.766006 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-1\") pod \"50064c0b-e5a3-46a3-9053-536fcbe380a3\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.766175 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ssh-key-openstack-edpm-ipam\") pod \"50064c0b-e5a3-46a3-9053-536fcbe380a3\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.766388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-2\") pod \"50064c0b-e5a3-46a3-9053-536fcbe380a3\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.766521 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nvwc\" (UniqueName: \"kubernetes.io/projected/50064c0b-e5a3-46a3-9053-536fcbe380a3-kube-api-access-2nvwc\") pod \"50064c0b-e5a3-46a3-9053-536fcbe380a3\" (UID: \"50064c0b-e5a3-46a3-9053-536fcbe380a3\") " Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.773877 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50064c0b-e5a3-46a3-9053-536fcbe380a3-kube-api-access-2nvwc" (OuterVolumeSpecName: "kube-api-access-2nvwc") pod 
"50064c0b-e5a3-46a3-9053-536fcbe380a3" (UID: "50064c0b-e5a3-46a3-9053-536fcbe380a3"). InnerVolumeSpecName "kube-api-access-2nvwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.777867 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "50064c0b-e5a3-46a3-9053-536fcbe380a3" (UID: "50064c0b-e5a3-46a3-9053-536fcbe380a3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.806616 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50064c0b-e5a3-46a3-9053-536fcbe380a3" (UID: "50064c0b-e5a3-46a3-9053-536fcbe380a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.820267 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "50064c0b-e5a3-46a3-9053-536fcbe380a3" (UID: "50064c0b-e5a3-46a3-9053-536fcbe380a3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.826866 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-inventory" (OuterVolumeSpecName: "inventory") pod "50064c0b-e5a3-46a3-9053-536fcbe380a3" (UID: "50064c0b-e5a3-46a3-9053-536fcbe380a3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.831050 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "50064c0b-e5a3-46a3-9053-536fcbe380a3" (UID: "50064c0b-e5a3-46a3-9053-536fcbe380a3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.843888 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "50064c0b-e5a3-46a3-9053-536fcbe380a3" (UID: "50064c0b-e5a3-46a3-9053-536fcbe380a3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.869593 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.869624 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.869634 4770 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.869644 4770 reconciler_common.go:293] "Volume detached for 
volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.869656 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.869665 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/50064c0b-e5a3-46a3-9053-536fcbe380a3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 19:29:59 crc kubenswrapper[4770]: I0126 19:29:59.869673 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nvwc\" (UniqueName: \"kubernetes.io/projected/50064c0b-e5a3-46a3-9053-536fcbe380a3-kube-api-access-2nvwc\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.116484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" event={"ID":"50064c0b-e5a3-46a3-9053-536fcbe380a3","Type":"ContainerDied","Data":"2abc969fd73b73bb13202ae07920d01f01e2edd601024e8fabf6f29bff76e18f"} Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.116524 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2abc969fd73b73bb13202ae07920d01f01e2edd601024e8fabf6f29bff76e18f" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.116522 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.174399 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82"] Jan 26 19:30:00 crc kubenswrapper[4770]: E0126 19:30:00.174848 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="extract-utilities" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.174868 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="extract-utilities" Jan 26 19:30:00 crc kubenswrapper[4770]: E0126 19:30:00.174887 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="registry-server" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.174896 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="registry-server" Jan 26 19:30:00 crc kubenswrapper[4770]: E0126 19:30:00.174921 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="extract-content" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.174929 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="extract-content" Jan 26 19:30:00 crc kubenswrapper[4770]: E0126 19:30:00.174942 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="extract-content" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.174950 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="extract-content" Jan 26 19:30:00 crc kubenswrapper[4770]: E0126 19:30:00.174988 4770 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="registry-server" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.174996 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="registry-server" Jan 26 19:30:00 crc kubenswrapper[4770]: E0126 19:30:00.175012 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="extract-utilities" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.175021 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="extract-utilities" Jan 26 19:30:00 crc kubenswrapper[4770]: E0126 19:30:00.175034 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50064c0b-e5a3-46a3-9053-536fcbe380a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.175044 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="50064c0b-e5a3-46a3-9053-536fcbe380a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.175280 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a8549d-1924-4553-bb59-98b3537b7c0f" containerName="registry-server" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.175307 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="50064c0b-e5a3-46a3-9053-536fcbe380a3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.175321 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="421cecfb-eed2-46ab-8c05-82ef2d5fc5f6" containerName="registry-server" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.176099 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.179380 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.179885 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.218001 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82"] Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.276776 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15d079b2-ed45-425a-8682-50d0b1d00711-config-volume\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.277068 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15d079b2-ed45-425a-8682-50d0b1d00711-secret-volume\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.277324 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs69m\" (UniqueName: \"kubernetes.io/projected/15d079b2-ed45-425a-8682-50d0b1d00711-kube-api-access-zs69m\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.379792 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs69m\" (UniqueName: \"kubernetes.io/projected/15d079b2-ed45-425a-8682-50d0b1d00711-kube-api-access-zs69m\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.379991 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15d079b2-ed45-425a-8682-50d0b1d00711-config-volume\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.380026 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15d079b2-ed45-425a-8682-50d0b1d00711-secret-volume\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.382529 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15d079b2-ed45-425a-8682-50d0b1d00711-config-volume\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.387132 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/15d079b2-ed45-425a-8682-50d0b1d00711-secret-volume\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.399838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs69m\" (UniqueName: \"kubernetes.io/projected/15d079b2-ed45-425a-8682-50d0b1d00711-kube-api-access-zs69m\") pod \"collect-profiles-29490930-b2m82\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:00 crc kubenswrapper[4770]: I0126 19:30:00.505344 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:01 crc kubenswrapper[4770]: I0126 19:30:01.002537 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82"] Jan 26 19:30:01 crc kubenswrapper[4770]: W0126 19:30:01.006023 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15d079b2_ed45_425a_8682_50d0b1d00711.slice/crio-bda85c28650b99adae529b5eaec00ad86cbddfc8b929d59b5c102206b6dbef67 WatchSource:0}: Error finding container bda85c28650b99adae529b5eaec00ad86cbddfc8b929d59b5c102206b6dbef67: Status 404 returned error can't find the container with id bda85c28650b99adae529b5eaec00ad86cbddfc8b929d59b5c102206b6dbef67 Jan 26 19:30:01 crc kubenswrapper[4770]: I0126 19:30:01.126258 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" event={"ID":"15d079b2-ed45-425a-8682-50d0b1d00711","Type":"ContainerStarted","Data":"bda85c28650b99adae529b5eaec00ad86cbddfc8b929d59b5c102206b6dbef67"} Jan 26 19:30:02 crc 
kubenswrapper[4770]: I0126 19:30:02.141745 4770 generic.go:334] "Generic (PLEG): container finished" podID="15d079b2-ed45-425a-8682-50d0b1d00711" containerID="823c974bf07b9f0e7a657b2d221c428851373e4694b7e5e111dddf143b0183f9" exitCode=0 Jan 26 19:30:02 crc kubenswrapper[4770]: I0126 19:30:02.142137 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" event={"ID":"15d079b2-ed45-425a-8682-50d0b1d00711","Type":"ContainerDied","Data":"823c974bf07b9f0e7a657b2d221c428851373e4694b7e5e111dddf143b0183f9"} Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.619369 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.699249 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15d079b2-ed45-425a-8682-50d0b1d00711-secret-volume\") pod \"15d079b2-ed45-425a-8682-50d0b1d00711\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.699355 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15d079b2-ed45-425a-8682-50d0b1d00711-config-volume\") pod \"15d079b2-ed45-425a-8682-50d0b1d00711\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.699537 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs69m\" (UniqueName: \"kubernetes.io/projected/15d079b2-ed45-425a-8682-50d0b1d00711-kube-api-access-zs69m\") pod \"15d079b2-ed45-425a-8682-50d0b1d00711\" (UID: \"15d079b2-ed45-425a-8682-50d0b1d00711\") " Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.700150 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/15d079b2-ed45-425a-8682-50d0b1d00711-config-volume" (OuterVolumeSpecName: "config-volume") pod "15d079b2-ed45-425a-8682-50d0b1d00711" (UID: "15d079b2-ed45-425a-8682-50d0b1d00711"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.705017 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d079b2-ed45-425a-8682-50d0b1d00711-kube-api-access-zs69m" (OuterVolumeSpecName: "kube-api-access-zs69m") pod "15d079b2-ed45-425a-8682-50d0b1d00711" (UID: "15d079b2-ed45-425a-8682-50d0b1d00711"). InnerVolumeSpecName "kube-api-access-zs69m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.705200 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d079b2-ed45-425a-8682-50d0b1d00711-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15d079b2-ed45-425a-8682-50d0b1d00711" (UID: "15d079b2-ed45-425a-8682-50d0b1d00711"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.803728 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15d079b2-ed45-425a-8682-50d0b1d00711-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.803781 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15d079b2-ed45-425a-8682-50d0b1d00711-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:03 crc kubenswrapper[4770]: I0126 19:30:03.803795 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs69m\" (UniqueName: \"kubernetes.io/projected/15d079b2-ed45-425a-8682-50d0b1d00711-kube-api-access-zs69m\") on node \"crc\" DevicePath \"\"" Jan 26 19:30:04 crc kubenswrapper[4770]: I0126 19:30:04.168829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" event={"ID":"15d079b2-ed45-425a-8682-50d0b1d00711","Type":"ContainerDied","Data":"bda85c28650b99adae529b5eaec00ad86cbddfc8b929d59b5c102206b6dbef67"} Jan 26 19:30:04 crc kubenswrapper[4770]: I0126 19:30:04.168881 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda85c28650b99adae529b5eaec00ad86cbddfc8b929d59b5c102206b6dbef67" Jan 26 19:30:04 crc kubenswrapper[4770]: I0126 19:30:04.169049 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82" Jan 26 19:30:04 crc kubenswrapper[4770]: I0126 19:30:04.710304 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr"] Jan 26 19:30:04 crc kubenswrapper[4770]: I0126 19:30:04.722008 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490885-f2mqr"] Jan 26 19:30:05 crc kubenswrapper[4770]: I0126 19:30:05.776318 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d7e7ec-f398-4606-8122-8338323b36c4" path="/var/lib/kubelet/pods/04d7e7ec-f398-4606-8122-8338323b36c4/volumes" Jan 26 19:30:20 crc kubenswrapper[4770]: I0126 19:30:20.464550 4770 scope.go:117] "RemoveContainer" containerID="7a36f0dae3ac18b4ecbce87ef62c14789841b13caaf93e6187e83077e50922da" Jan 26 19:30:30 crc kubenswrapper[4770]: I0126 19:30:30.330785 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:30:30 crc kubenswrapper[4770]: I0126 19:30:30.331547 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.826028 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 26 19:30:35 crc kubenswrapper[4770]: E0126 19:30:35.827447 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d079b2-ed45-425a-8682-50d0b1d00711" 
containerName="collect-profiles" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.827474 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d079b2-ed45-425a-8682-50d0b1d00711" containerName="collect-profiles" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.827852 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d079b2-ed45-425a-8682-50d0b1d00711" containerName="collect-profiles" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.829477 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.831648 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.852043 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.920598 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-lib-modules\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921014 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921173 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-sys\") pod \"cinder-backup-0\" (UID: 
\"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921242 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzx26\" (UniqueName: \"kubernetes.io/projected/0c995c7e-a30a-4482-98f4-1b88979f2702-kube-api-access-nzx26\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921293 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-config-data\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921346 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921399 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921466 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-run\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc 
kubenswrapper[4770]: I0126 19:30:35.921521 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-scripts\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921558 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921609 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921675 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921741 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921782 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-dev\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.921866 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.924619 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.926983 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.931749 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 26 19:30:35 crc kubenswrapper[4770]: I0126 19:30:35.942481 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.032921 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.034537 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035117 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-lib-modules\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035137 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035154 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035187 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 
19:30:36.035228 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-sys\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035224 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-lib-modules\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035280 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-sys\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035357 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzx26\" (UniqueName: \"kubernetes.io/projected/0c995c7e-a30a-4482-98f4-1b88979f2702-kube-api-access-nzx26\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035383 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-config-data\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035402 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-locks-cinder\") pod 
\"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035422 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035437 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035453 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035469 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035488 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc 
kubenswrapper[4770]: I0126 19:30:36.035507 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035532 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-run\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.035549 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-run\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.038812 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-scripts\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039101 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039163 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039196 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039288 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039387 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039426 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039453 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039478 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbwl\" (UniqueName: \"kubernetes.io/projected/098c14a9-04f1-4bba-8770-cb3ba0add71e-kube-api-access-rpbwl\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039480 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039425 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-run\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039534 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039656 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039689 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-dev\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039836 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039889 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.039922 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.040921 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-dev\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc 
kubenswrapper[4770]: I0126 19:30:36.041166 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.041248 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.041822 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0c995c7e-a30a-4482-98f4-1b88979f2702-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.048913 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-scripts\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.071115 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-config-data\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.073614 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-config-data-custom\") pod 
\"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.075339 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.094454 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c995c7e-a30a-4482-98f4-1b88979f2702-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.119428 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzx26\" (UniqueName: \"kubernetes.io/projected/0c995c7e-a30a-4482-98f4-1b88979f2702-kube-api-access-nzx26\") pod \"cinder-backup-0\" (UID: \"0c995c7e-a30a-4482-98f4-1b88979f2702\") " pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145064 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145125 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145148 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145183 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145225 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6rt\" (UniqueName: \"kubernetes.io/projected/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-kube-api-access-dc6rt\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145275 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145301 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145319 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-dev\") pod \"cinder-volume-nfs-0\" 
(UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145347 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145394 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-run\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145423 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145464 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145488 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145538 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145564 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145589 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpbwl\" (UniqueName: \"kubernetes.io/projected/098c14a9-04f1-4bba-8770-cb3ba0add71e-kube-api-access-rpbwl\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145616 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145680 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145773 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145796 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145857 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145886 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145930 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145952 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.145970 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " 
pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146016 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146150 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146180 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146201 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146220 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146239 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.146257 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-run\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.147166 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.147449 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.147494 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.153850 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/098c14a9-04f1-4bba-8770-cb3ba0add71e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc 
kubenswrapper[4770]: I0126 19:30:36.156287 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.163076 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.164284 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.184886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.185486 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpbwl\" (UniqueName: \"kubernetes.io/projected/098c14a9-04f1-4bba-8770-cb3ba0add71e-kube-api-access-rpbwl\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.191318 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/098c14a9-04f1-4bba-8770-cb3ba0add71e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"098c14a9-04f1-4bba-8770-cb3ba0add71e\") " pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc 
kubenswrapper[4770]: I0126 19:30:36.248048 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248125 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc6rt\" (UniqueName: \"kubernetes.io/projected/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-kube-api-access-dc6rt\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248201 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248267 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248292 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248337 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248362 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248376 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248446 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-sys\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248490 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248510 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248525 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248564 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248679 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248760 
4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248799 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.248963 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.249066 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.250201 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.250254 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-dev\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.250283 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.250387 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.250423 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.255826 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.256316 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.257263 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.257801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.284954 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.287317 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc6rt\" (UniqueName: \"kubernetes.io/projected/c5b4494e-e5fd-4561-8b35-9993d10cbe6b-kube-api-access-dc6rt\") pod \"cinder-volume-nfs-2-0\" (UID: \"c5b4494e-e5fd-4561-8b35-9993d10cbe6b\") " pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.392520 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.758175 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.762752 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:30:36 crc kubenswrapper[4770]: I0126 19:30:36.912679 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 26 19:30:36 crc kubenswrapper[4770]: W0126 19:30:36.914321 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod098c14a9_04f1_4bba_8770_cb3ba0add71e.slice/crio-e594d8938c45b406512068d712ac54ba0f51d6b226c1e9cabb74a8bd81b97025 WatchSource:0}: Error finding container e594d8938c45b406512068d712ac54ba0f51d6b226c1e9cabb74a8bd81b97025: Status 404 returned error can't find the container with id e594d8938c45b406512068d712ac54ba0f51d6b226c1e9cabb74a8bd81b97025 Jan 26 19:30:37 crc kubenswrapper[4770]: I0126 19:30:37.014113 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 26 19:30:37 crc kubenswrapper[4770]: I0126 19:30:37.573920 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"c5b4494e-e5fd-4561-8b35-9993d10cbe6b","Type":"ContainerStarted","Data":"5823d9b9cb7681fbd006355da648c56a9d810e08ebd900924d0b7f7fe8e2b78d"} Jan 26 19:30:37 crc kubenswrapper[4770]: I0126 19:30:37.574289 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"c5b4494e-e5fd-4561-8b35-9993d10cbe6b","Type":"ContainerStarted","Data":"06cbfcd97d27fabd44327b40954420e5265dd65594d67bd4e6b90763e9a3d708"} Jan 26 19:30:37 crc kubenswrapper[4770]: I0126 19:30:37.577487 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-volume-nfs-0" event={"ID":"098c14a9-04f1-4bba-8770-cb3ba0add71e","Type":"ContainerStarted","Data":"1eacb21e1e196a0041812ed2544ec180e811df2f21b2c60854fb19ee40374447"} Jan 26 19:30:37 crc kubenswrapper[4770]: I0126 19:30:37.577527 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"098c14a9-04f1-4bba-8770-cb3ba0add71e","Type":"ContainerStarted","Data":"e594d8938c45b406512068d712ac54ba0f51d6b226c1e9cabb74a8bd81b97025"} Jan 26 19:30:37 crc kubenswrapper[4770]: I0126 19:30:37.579204 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0c995c7e-a30a-4482-98f4-1b88979f2702","Type":"ContainerStarted","Data":"435453deb439359c020ec0fa3dfd5eb452a28a33d6de137e1599d649a2fd880c"} Jan 26 19:30:37 crc kubenswrapper[4770]: I0126 19:30:37.579229 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0c995c7e-a30a-4482-98f4-1b88979f2702","Type":"ContainerStarted","Data":"a0fc5e01fe06f1cb23f7b392cf67533ff742f9ba2c4fcb41d9cc7dc1b6fdc422"} Jan 26 19:30:38 crc kubenswrapper[4770]: I0126 19:30:38.597453 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"c5b4494e-e5fd-4561-8b35-9993d10cbe6b","Type":"ContainerStarted","Data":"5926598d3564bcaf430ff25beb4c966ca9bc0f639c2d10f6d77dfd784042eb67"} Jan 26 19:30:38 crc kubenswrapper[4770]: I0126 19:30:38.601844 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"098c14a9-04f1-4bba-8770-cb3ba0add71e","Type":"ContainerStarted","Data":"817d0ac3cb479a77eccf7ce68bf7141811ac4b9c1d7cf4f57312b13bc0b78a34"} Jan 26 19:30:38 crc kubenswrapper[4770]: I0126 19:30:38.604726 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"0c995c7e-a30a-4482-98f4-1b88979f2702","Type":"ContainerStarted","Data":"80716c354f1bf52bf3c62f035be37d9997d90b7eda7f29e94b604aa2d0bcfef2"} Jan 26 19:30:38 crc kubenswrapper[4770]: I0126 19:30:38.623948 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.562233735 podStartE2EDuration="3.623930743s" podCreationTimestamp="2026-01-26 19:30:35 +0000 UTC" firstStartedPulling="2026-01-26 19:30:37.101463808 +0000 UTC m=+2921.666370530" lastFinishedPulling="2026-01-26 19:30:37.163160806 +0000 UTC m=+2921.728067538" observedRunningTime="2026-01-26 19:30:38.622327929 +0000 UTC m=+2923.187234671" watchObservedRunningTime="2026-01-26 19:30:38.623930743 +0000 UTC m=+2923.188837475" Jan 26 19:30:38 crc kubenswrapper[4770]: I0126 19:30:38.660686 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.46743368 podStartE2EDuration="3.660667497s" podCreationTimestamp="2026-01-26 19:30:35 +0000 UTC" firstStartedPulling="2026-01-26 19:30:36.762533995 +0000 UTC m=+2921.327440727" lastFinishedPulling="2026-01-26 19:30:36.955767812 +0000 UTC m=+2921.520674544" observedRunningTime="2026-01-26 19:30:38.659395313 +0000 UTC m=+2923.224302075" watchObservedRunningTime="2026-01-26 19:30:38.660667497 +0000 UTC m=+2923.225574239" Jan 26 19:30:38 crc kubenswrapper[4770]: I0126 19:30:38.687860 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=3.450520879 podStartE2EDuration="3.687840681s" podCreationTimestamp="2026-01-26 19:30:35 +0000 UTC" firstStartedPulling="2026-01-26 19:30:36.922395329 +0000 UTC m=+2921.487302061" lastFinishedPulling="2026-01-26 19:30:37.159715131 +0000 UTC m=+2921.724621863" observedRunningTime="2026-01-26 19:30:38.686086384 +0000 UTC m=+2923.250993126" watchObservedRunningTime="2026-01-26 19:30:38.687840681 +0000 UTC m=+2923.252747423" 
Jan 26 19:30:41 crc kubenswrapper[4770]: I0126 19:30:41.163822 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 26 19:30:41 crc kubenswrapper[4770]: I0126 19:30:41.287165 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:41 crc kubenswrapper[4770]: I0126 19:30:41.393799 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:30:46 crc kubenswrapper[4770]: I0126 19:30:46.331099 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 26 19:30:46 crc kubenswrapper[4770]: I0126 19:30:46.560960 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 26 19:30:46 crc kubenswrapper[4770]: I0126 19:30:46.632018 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 26 19:31:00 crc kubenswrapper[4770]: I0126 19:31:00.330518 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:31:00 crc kubenswrapper[4770]: I0126 19:31:00.331180 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:31:30 crc kubenswrapper[4770]: I0126 19:31:30.330542 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:31:30 crc kubenswrapper[4770]: I0126 19:31:30.331223 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:31:30 crc kubenswrapper[4770]: I0126 19:31:30.331294 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:31:30 crc kubenswrapper[4770]: I0126 19:31:30.332446 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:31:30 crc kubenswrapper[4770]: I0126 19:31:30.332529 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" gracePeriod=600 Jan 26 19:31:30 crc kubenswrapper[4770]: E0126 19:31:30.456008 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:31:31 crc kubenswrapper[4770]: I0126 19:31:31.279458 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" exitCode=0 Jan 26 19:31:31 crc kubenswrapper[4770]: I0126 19:31:31.279528 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256"} Jan 26 19:31:31 crc kubenswrapper[4770]: I0126 19:31:31.279914 4770 scope.go:117] "RemoveContainer" containerID="51ba5f683ddfdfb2a03144d9fb048a07d8c7506062b10a748423fb653199a419" Jan 26 19:31:31 crc kubenswrapper[4770]: I0126 19:31:31.280687 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:31:31 crc kubenswrapper[4770]: E0126 19:31:31.281041 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:31:35 crc kubenswrapper[4770]: I0126 19:31:35.814342 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bvszz"] Jan 26 19:31:35 crc kubenswrapper[4770]: I0126 19:31:35.817177 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvszz"] Jan 26 19:31:35 crc kubenswrapper[4770]: I0126 19:31:35.817271 4770 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:35 crc kubenswrapper[4770]: I0126 19:31:35.967096 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-utilities\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:35 crc kubenswrapper[4770]: I0126 19:31:35.969815 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlwkz\" (UniqueName: \"kubernetes.io/projected/a9afbf29-1b16-41ca-af4a-82a783503843-kube-api-access-dlwkz\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:35 crc kubenswrapper[4770]: I0126 19:31:35.970099 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-catalog-content\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.074381 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlwkz\" (UniqueName: \"kubernetes.io/projected/a9afbf29-1b16-41ca-af4a-82a783503843-kube-api-access-dlwkz\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.074478 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-catalog-content\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.074601 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-utilities\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.075250 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-utilities\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.076112 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-catalog-content\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.097053 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlwkz\" (UniqueName: \"kubernetes.io/projected/a9afbf29-1b16-41ca-af4a-82a783503843-kube-api-access-dlwkz\") pod \"community-operators-bvszz\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.148140 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:36 crc kubenswrapper[4770]: I0126 19:31:36.731406 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvszz"] Jan 26 19:31:37 crc kubenswrapper[4770]: I0126 19:31:37.350240 4770 generic.go:334] "Generic (PLEG): container finished" podID="a9afbf29-1b16-41ca-af4a-82a783503843" containerID="40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d" exitCode=0 Jan 26 19:31:37 crc kubenswrapper[4770]: I0126 19:31:37.350288 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvszz" event={"ID":"a9afbf29-1b16-41ca-af4a-82a783503843","Type":"ContainerDied","Data":"40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d"} Jan 26 19:31:37 crc kubenswrapper[4770]: I0126 19:31:37.350614 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvszz" event={"ID":"a9afbf29-1b16-41ca-af4a-82a783503843","Type":"ContainerStarted","Data":"fd6c8c20470e763089e6b0b7146bf74dd90ce4041b30b92077109c8990d3f02e"} Jan 26 19:31:38 crc kubenswrapper[4770]: I0126 19:31:38.366639 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvszz" event={"ID":"a9afbf29-1b16-41ca-af4a-82a783503843","Type":"ContainerStarted","Data":"ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690"} Jan 26 19:31:40 crc kubenswrapper[4770]: I0126 19:31:40.039118 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:31:40 crc kubenswrapper[4770]: I0126 19:31:40.391990 4770 generic.go:334] "Generic (PLEG): container finished" podID="a9afbf29-1b16-41ca-af4a-82a783503843" containerID="ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690" exitCode=0 Jan 26 19:31:40 crc kubenswrapper[4770]: I0126 19:31:40.392117 4770 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-bvszz" event={"ID":"a9afbf29-1b16-41ca-af4a-82a783503843","Type":"ContainerDied","Data":"ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690"} Jan 26 19:31:40 crc kubenswrapper[4770]: I0126 19:31:40.392796 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="prometheus" containerID="cri-o://eb6562b20a3a132052e1a9d98a952dd191f4f26f39c4f4f69a8e376bbcc54e50" gracePeriod=600 Jan 26 19:31:40 crc kubenswrapper[4770]: I0126 19:31:40.392850 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="thanos-sidecar" containerID="cri-o://2ba749296766f922be27ee679648bf0d878341f7396138398596fe2c4b6c09c2" gracePeriod=600 Jan 26 19:31:40 crc kubenswrapper[4770]: I0126 19:31:40.392871 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="config-reloader" containerID="cri-o://a4d76a4495d70df695cd35c4c2377357b0f03c4e1b20fdcd7f9402bc7c642ac8" gracePeriod=600 Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.402878 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvszz" event={"ID":"a9afbf29-1b16-41ca-af4a-82a783503843","Type":"ContainerStarted","Data":"24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90"} Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405530 4770 generic.go:334] "Generic (PLEG): container finished" podID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerID="2ba749296766f922be27ee679648bf0d878341f7396138398596fe2c4b6c09c2" exitCode=0 Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405561 4770 generic.go:334] "Generic (PLEG): container 
finished" podID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerID="a4d76a4495d70df695cd35c4c2377357b0f03c4e1b20fdcd7f9402bc7c642ac8" exitCode=0 Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405571 4770 generic.go:334] "Generic (PLEG): container finished" podID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerID="eb6562b20a3a132052e1a9d98a952dd191f4f26f39c4f4f69a8e376bbcc54e50" exitCode=0 Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405595 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerDied","Data":"2ba749296766f922be27ee679648bf0d878341f7396138398596fe2c4b6c09c2"} Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405626 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerDied","Data":"a4d76a4495d70df695cd35c4c2377357b0f03c4e1b20fdcd7f9402bc7c642ac8"} Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405636 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerDied","Data":"eb6562b20a3a132052e1a9d98a952dd191f4f26f39c4f4f69a8e376bbcc54e50"} Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405646 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8b8f0de9-6829-4178-8fdb-647aeac4384d","Type":"ContainerDied","Data":"f13fe82d51bf1ea3edc12c2c9b641556b8be2aa09b120a2baa4d8451cc9ffa18"} Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.405656 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f13fe82d51bf1ea3edc12c2c9b641556b8be2aa09b120a2baa4d8451cc9ffa18" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.426434 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-bvszz" podStartSLOduration=2.9745446429999998 podStartE2EDuration="6.426420156s" podCreationTimestamp="2026-01-26 19:31:35 +0000 UTC" firstStartedPulling="2026-01-26 19:31:37.352092403 +0000 UTC m=+2981.916999135" lastFinishedPulling="2026-01-26 19:31:40.803967916 +0000 UTC m=+2985.368874648" observedRunningTime="2026-01-26 19:31:41.424801372 +0000 UTC m=+2985.989708104" watchObservedRunningTime="2026-01-26 19:31:41.426420156 +0000 UTC m=+2985.991326888" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.440454 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.593452 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-0\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.593903 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.594792 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.594900 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-1\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.594943 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-config\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.594969 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-tls-assets\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.594996 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b8f0de9-6829-4178-8fdb-647aeac4384d-config-out\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.595022 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-2\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.595074 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.595109 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwtqk\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-kube-api-access-rwtqk\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.595141 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-secret-combined-ca-bundle\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.595204 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.595240 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-thanos-prometheus-http-client-file\") pod \"8b8f0de9-6829-4178-8fdb-647aeac4384d\" (UID: \"8b8f0de9-6829-4178-8fdb-647aeac4384d\") " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.595275 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.596417 4770 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.600058 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.611825 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.617674 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b8f0de9-6829-4178-8fdb-647aeac4384d-config-out" (OuterVolumeSpecName: "config-out") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.619038 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-kube-api-access-rwtqk" (OuterVolumeSpecName: "kube-api-access-rwtqk") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "kube-api-access-rwtqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.621000 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.625877 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-config" (OuterVolumeSpecName: "config") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.627984 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.629002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.629841 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.638636 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.697913 4770 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.697941 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.697954 4770 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.697962 4770 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b8f0de9-6829-4178-8fdb-647aeac4384d-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.697932 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). 
InnerVolumeSpecName "pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.697971 4770 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8b8f0de9-6829-4178-8fdb-647aeac4384d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.697991 4770 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.698003 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwtqk\" (UniqueName: \"kubernetes.io/projected/8b8f0de9-6829-4178-8fdb-647aeac4384d-kube-api-access-rwtqk\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.698013 4770 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.698024 4770 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.698036 4770 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-thanos-prometheus-http-client-file\") on 
node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.767755 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:31:41 crc kubenswrapper[4770]: E0126 19:31:41.768092 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.800543 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") on node \"crc\" " Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.804441 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config" (OuterVolumeSpecName: "web-config") pod "8b8f0de9-6829-4178-8fdb-647aeac4384d" (UID: "8b8f0de9-6829-4178-8fdb-647aeac4384d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.833309 4770 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.833458 4770 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91") on node "crc" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.902608 4770 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b8f0de9-6829-4178-8fdb-647aeac4384d-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:41 crc kubenswrapper[4770]: I0126 19:31:41.902640 4770 reconciler_common.go:293] "Volume detached for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.414938 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.464752 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.477473 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511010 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:31:42 crc kubenswrapper[4770]: E0126 19:31:42.511483 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="prometheus" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511508 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="prometheus" Jan 26 19:31:42 crc kubenswrapper[4770]: E0126 19:31:42.511539 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="thanos-sidecar" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511547 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="thanos-sidecar" Jan 26 19:31:42 crc kubenswrapper[4770]: E0126 19:31:42.511570 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="init-config-reloader" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511579 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="init-config-reloader" Jan 26 19:31:42 crc kubenswrapper[4770]: E0126 19:31:42.511589 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="config-reloader" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511595 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="config-reloader" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511913 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="prometheus" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511959 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="config-reloader" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.511969 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" containerName="thanos-sidecar" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.514309 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.520482 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gqjgz" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.520826 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.520890 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.521031 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.521144 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.521253 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.521892 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.527365 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.533237 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.618835 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.618885 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.618927 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619163 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619315 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619336 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/caa91c00-9169-4445-af73-064cb3a08a3a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619377 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619567 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/caa91c00-9169-4445-af73-064cb3a08a3a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sql92\" (UniqueName: \"kubernetes.io/projected/caa91c00-9169-4445-af73-064cb3a08a3a-kube-api-access-sql92\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619653 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.619838 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.721962 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722061 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/caa91c00-9169-4445-af73-064cb3a08a3a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sql92\" (UniqueName: \"kubernetes.io/projected/caa91c00-9169-4445-af73-064cb3a08a3a-kube-api-access-sql92\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722127 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722164 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " 
pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722270 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722310 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-config\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722496 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/caa91c00-9169-4445-af73-064cb3a08a3a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.722528 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.723488 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.724081 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.724618 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/caa91c00-9169-4445-af73-064cb3a08a3a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.730257 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.734749 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.734803 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bce0a61bb2b9f961be74694fe5f6cf0aff9e298c0837c7d91488158ec6fad94/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.735449 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/caa91c00-9169-4445-af73-064cb3a08a3a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.741666 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.746949 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-config\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.752170 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.754678 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/caa91c00-9169-4445-af73-064cb3a08a3a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.755174 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.757946 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/caa91c00-9169-4445-af73-064cb3a08a3a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.765740 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sql92\" (UniqueName: \"kubernetes.io/projected/caa91c00-9169-4445-af73-064cb3a08a3a-kube-api-access-sql92\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.808592 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bfa4e7c8-2a58-472d-83cd-715c11187f91\") pod \"prometheus-metric-storage-0\" (UID: \"caa91c00-9169-4445-af73-064cb3a08a3a\") " pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:42 crc kubenswrapper[4770]: I0126 19:31:42.872980 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 26 19:31:43 crc kubenswrapper[4770]: W0126 19:31:43.372853 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaa91c00_9169_4445_af73_064cb3a08a3a.slice/crio-4ec0dfb4afd3b952fdc00eb36bfb7471ae723625f8bfd2b848f3190e1bead369 WatchSource:0}: Error finding container 4ec0dfb4afd3b952fdc00eb36bfb7471ae723625f8bfd2b848f3190e1bead369: Status 404 returned error can't find the container with id 4ec0dfb4afd3b952fdc00eb36bfb7471ae723625f8bfd2b848f3190e1bead369 Jan 26 19:31:43 crc kubenswrapper[4770]: I0126 19:31:43.374095 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 26 19:31:43 crc kubenswrapper[4770]: I0126 19:31:43.426079 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"caa91c00-9169-4445-af73-064cb3a08a3a","Type":"ContainerStarted","Data":"4ec0dfb4afd3b952fdc00eb36bfb7471ae723625f8bfd2b848f3190e1bead369"} Jan 26 19:31:43 crc kubenswrapper[4770]: I0126 19:31:43.777163 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8f0de9-6829-4178-8fdb-647aeac4384d" path="/var/lib/kubelet/pods/8b8f0de9-6829-4178-8fdb-647aeac4384d/volumes" Jan 26 19:31:46 crc kubenswrapper[4770]: I0126 19:31:46.149710 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:46 crc kubenswrapper[4770]: I0126 19:31:46.150239 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:46 crc kubenswrapper[4770]: I0126 19:31:46.437911 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:46 crc kubenswrapper[4770]: I0126 19:31:46.509719 4770 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:47 crc kubenswrapper[4770]: I0126 19:31:47.468067 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"caa91c00-9169-4445-af73-064cb3a08a3a","Type":"ContainerStarted","Data":"e13b07cf1b5104b6f25de02abace4c38ad548ff443abcbc7406a5fde20bc8120"} Jan 26 19:31:48 crc kubenswrapper[4770]: I0126 19:31:48.131403 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvszz"] Jan 26 19:31:48 crc kubenswrapper[4770]: I0126 19:31:48.477609 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bvszz" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="registry-server" containerID="cri-o://24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90" gracePeriod=2 Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.093882 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.151726 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-catalog-content\") pod \"a9afbf29-1b16-41ca-af4a-82a783503843\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.151885 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-utilities\") pod \"a9afbf29-1b16-41ca-af4a-82a783503843\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.151972 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlwkz\" (UniqueName: \"kubernetes.io/projected/a9afbf29-1b16-41ca-af4a-82a783503843-kube-api-access-dlwkz\") pod \"a9afbf29-1b16-41ca-af4a-82a783503843\" (UID: \"a9afbf29-1b16-41ca-af4a-82a783503843\") " Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.152598 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-utilities" (OuterVolumeSpecName: "utilities") pod "a9afbf29-1b16-41ca-af4a-82a783503843" (UID: "a9afbf29-1b16-41ca-af4a-82a783503843"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.162335 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9afbf29-1b16-41ca-af4a-82a783503843-kube-api-access-dlwkz" (OuterVolumeSpecName: "kube-api-access-dlwkz") pod "a9afbf29-1b16-41ca-af4a-82a783503843" (UID: "a9afbf29-1b16-41ca-af4a-82a783503843"). InnerVolumeSpecName "kube-api-access-dlwkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.209682 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9afbf29-1b16-41ca-af4a-82a783503843" (UID: "a9afbf29-1b16-41ca-af4a-82a783503843"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.254288 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.254322 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9afbf29-1b16-41ca-af4a-82a783503843-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.254331 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlwkz\" (UniqueName: \"kubernetes.io/projected/a9afbf29-1b16-41ca-af4a-82a783503843-kube-api-access-dlwkz\") on node \"crc\" DevicePath \"\"" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.486360 4770 generic.go:334] "Generic (PLEG): container finished" podID="a9afbf29-1b16-41ca-af4a-82a783503843" containerID="24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90" exitCode=0 Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.486396 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvszz" event={"ID":"a9afbf29-1b16-41ca-af4a-82a783503843","Type":"ContainerDied","Data":"24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90"} Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.486420 4770 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bvszz" event={"ID":"a9afbf29-1b16-41ca-af4a-82a783503843","Type":"ContainerDied","Data":"fd6c8c20470e763089e6b0b7146bf74dd90ce4041b30b92077109c8990d3f02e"} Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.486443 4770 scope.go:117] "RemoveContainer" containerID="24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.486466 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvszz" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.531052 4770 scope.go:117] "RemoveContainer" containerID="ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.533988 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvszz"] Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.552131 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bvszz"] Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.555970 4770 scope.go:117] "RemoveContainer" containerID="40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.610686 4770 scope.go:117] "RemoveContainer" containerID="24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90" Jan 26 19:31:49 crc kubenswrapper[4770]: E0126 19:31:49.611316 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90\": container with ID starting with 24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90 not found: ID does not exist" containerID="24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 
19:31:49.611370 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90"} err="failed to get container status \"24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90\": rpc error: code = NotFound desc = could not find container \"24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90\": container with ID starting with 24e378f9be0c48c5a27bf5ed05cf7682b336a2c5b3f3e854c8d5344c2d716b90 not found: ID does not exist" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.611406 4770 scope.go:117] "RemoveContainer" containerID="ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690" Jan 26 19:31:49 crc kubenswrapper[4770]: E0126 19:31:49.611749 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690\": container with ID starting with ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690 not found: ID does not exist" containerID="ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.611776 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690"} err="failed to get container status \"ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690\": rpc error: code = NotFound desc = could not find container \"ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690\": container with ID starting with ca0f0415641e667fa7ab6fddd29df0e4d569f5fc2f3af972ec440ab2d5da2690 not found: ID does not exist" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.611800 4770 scope.go:117] "RemoveContainer" containerID="40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d" Jan 26 19:31:49 crc 
kubenswrapper[4770]: E0126 19:31:49.612062 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d\": container with ID starting with 40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d not found: ID does not exist" containerID="40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.612090 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d"} err="failed to get container status \"40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d\": rpc error: code = NotFound desc = could not find container \"40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d\": container with ID starting with 40c8e56d57ce01c2de125e491e947de0a5d4b05c813c80149f75864bb61abe5d not found: ID does not exist" Jan 26 19:31:49 crc kubenswrapper[4770]: I0126 19:31:49.787902 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" path="/var/lib/kubelet/pods/a9afbf29-1b16-41ca-af4a-82a783503843/volumes" Jan 26 19:31:52 crc kubenswrapper[4770]: I0126 19:31:52.767896 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:31:52 crc kubenswrapper[4770]: E0126 19:31:52.768320 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:31:55 crc 
kubenswrapper[4770]: I0126 19:31:55.548724 4770 generic.go:334] "Generic (PLEG): container finished" podID="caa91c00-9169-4445-af73-064cb3a08a3a" containerID="e13b07cf1b5104b6f25de02abace4c38ad548ff443abcbc7406a5fde20bc8120" exitCode=0 Jan 26 19:31:55 crc kubenswrapper[4770]: I0126 19:31:55.548754 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"caa91c00-9169-4445-af73-064cb3a08a3a","Type":"ContainerDied","Data":"e13b07cf1b5104b6f25de02abace4c38ad548ff443abcbc7406a5fde20bc8120"} Jan 26 19:31:56 crc kubenswrapper[4770]: I0126 19:31:56.562046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"caa91c00-9169-4445-af73-064cb3a08a3a","Type":"ContainerStarted","Data":"da8559b6f37373313fc441bafbc34f1be6f28c298f01eb71c93fe430549c019e"} Jan 26 19:32:00 crc kubenswrapper[4770]: I0126 19:32:00.601125 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"caa91c00-9169-4445-af73-064cb3a08a3a","Type":"ContainerStarted","Data":"ffa507409df40bcec3e5fbc2d6f779cf930440dcc5de493688e36df75559e00c"} Jan 26 19:32:00 crc kubenswrapper[4770]: I0126 19:32:00.601642 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"caa91c00-9169-4445-af73-064cb3a08a3a","Type":"ContainerStarted","Data":"6a690a3013e07746bffb8848b08eb5a98e12859b1892a382e866101fc64c077f"} Jan 26 19:32:00 crc kubenswrapper[4770]: I0126 19:32:00.650599 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.650577251 podStartE2EDuration="18.650577251s" podCreationTimestamp="2026-01-26 19:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 19:32:00.645758359 +0000 UTC m=+3005.210665111" 
watchObservedRunningTime="2026-01-26 19:32:00.650577251 +0000 UTC m=+3005.215484013" Jan 26 19:32:02 crc kubenswrapper[4770]: I0126 19:32:02.875830 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 26 19:32:03 crc kubenswrapper[4770]: I0126 19:32:03.767354 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:32:03 crc kubenswrapper[4770]: E0126 19:32:03.768133 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:32:12 crc kubenswrapper[4770]: I0126 19:32:12.873738 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 26 19:32:12 crc kubenswrapper[4770]: I0126 19:32:12.881972 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 26 19:32:13 crc kubenswrapper[4770]: I0126 19:32:13.758771 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 26 19:32:17 crc kubenswrapper[4770]: I0126 19:32:17.768998 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:32:17 crc kubenswrapper[4770]: E0126 19:32:17.769944 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:32:20 crc kubenswrapper[4770]: I0126 19:32:20.581437 4770 scope.go:117] "RemoveContainer" containerID="eb6562b20a3a132052e1a9d98a952dd191f4f26f39c4f4f69a8e376bbcc54e50" Jan 26 19:32:20 crc kubenswrapper[4770]: I0126 19:32:20.611221 4770 scope.go:117] "RemoveContainer" containerID="137ef6b5aa37f18a214456c1119563d2719b46867dddaeecc51a319ccfa30bbc" Jan 26 19:32:20 crc kubenswrapper[4770]: I0126 19:32:20.667309 4770 scope.go:117] "RemoveContainer" containerID="a4d76a4495d70df695cd35c4c2377357b0f03c4e1b20fdcd7f9402bc7c642ac8" Jan 26 19:32:20 crc kubenswrapper[4770]: I0126 19:32:20.726479 4770 scope.go:117] "RemoveContainer" containerID="2ba749296766f922be27ee679648bf0d878341f7396138398596fe2c4b6c09c2" Jan 26 19:32:28 crc kubenswrapper[4770]: I0126 19:32:28.768032 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:32:28 crc kubenswrapper[4770]: E0126 19:32:28.769026 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.595771 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 19:32:33 crc kubenswrapper[4770]: E0126 19:32:33.596990 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="extract-content" Jan 26 19:32:33 crc 
kubenswrapper[4770]: I0126 19:32:33.597013 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="extract-content" Jan 26 19:32:33 crc kubenswrapper[4770]: E0126 19:32:33.597039 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="registry-server" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.597047 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="registry-server" Jan 26 19:32:33 crc kubenswrapper[4770]: E0126 19:32:33.597076 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="extract-utilities" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.597085 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="extract-utilities" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.598281 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9afbf29-1b16-41ca-af4a-82a783503843" containerName="registry-server" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.599953 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.622564 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.622916 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.624913 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.628076 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.628252 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hkh56" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751184 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751272 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wq4\" (UniqueName: \"kubernetes.io/projected/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-kube-api-access-l2wq4\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751300 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751377 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751425 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-config-data\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751527 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.751559 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.853770 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.853837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2wq4\" (UniqueName: \"kubernetes.io/projected/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-kube-api-access-l2wq4\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.853868 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.853913 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.853967 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-config-data\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.853988 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.854020 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.854047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.854084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.854521 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.854567 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.854916 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.855370 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.855882 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-config-data\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " 
pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.861459 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.861966 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.862614 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.882563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2wq4\" (UniqueName: \"kubernetes.io/projected/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-kube-api-access-l2wq4\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.890883 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " pod="openstack/tempest-tests-tempest" Jan 26 19:32:33 crc kubenswrapper[4770]: I0126 19:32:33.933926 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 19:32:34 crc kubenswrapper[4770]: I0126 19:32:34.468151 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 26 19:32:34 crc kubenswrapper[4770]: I0126 19:32:34.618596 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b864a6fc-56ae-4c06-ad45-4ca55e1afd91","Type":"ContainerStarted","Data":"2c0300b29e0ccfd6d30135d3cc6ca7a1311634e6d7918e7eaa26be971534e0ae"} Jan 26 19:32:42 crc kubenswrapper[4770]: I0126 19:32:42.767638 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:32:42 crc kubenswrapper[4770]: E0126 19:32:42.768439 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:32:45 crc kubenswrapper[4770]: I0126 19:32:45.798621 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b864a6fc-56ae-4c06-ad45-4ca55e1afd91","Type":"ContainerStarted","Data":"b46a2231cd68fe4e5cea96b1d49ebff0e75e85d23b4e74ff5b1e476ddd8d377d"} Jan 26 19:32:45 crc kubenswrapper[4770]: I0126 19:32:45.801227 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.865480485 podStartE2EDuration="13.801206968s" podCreationTimestamp="2026-01-26 19:32:32 +0000 UTC" firstStartedPulling="2026-01-26 19:32:34.476037149 +0000 UTC m=+3039.040943891" lastFinishedPulling="2026-01-26 19:32:44.411763632 +0000 UTC m=+3048.976670374" observedRunningTime="2026-01-26 
19:32:45.791536934 +0000 UTC m=+3050.356443686" watchObservedRunningTime="2026-01-26 19:32:45.801206968 +0000 UTC m=+3050.366113710" Jan 26 19:32:55 crc kubenswrapper[4770]: I0126 19:32:55.781507 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:32:55 crc kubenswrapper[4770]: E0126 19:32:55.784277 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:33:08 crc kubenswrapper[4770]: I0126 19:33:08.768781 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:33:08 crc kubenswrapper[4770]: E0126 19:33:08.769754 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:33:21 crc kubenswrapper[4770]: I0126 19:33:21.768132 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:33:21 crc kubenswrapper[4770]: E0126 19:33:21.769494 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:33:33 crc kubenswrapper[4770]: I0126 19:33:33.768095 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:33:33 crc kubenswrapper[4770]: E0126 19:33:33.769393 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:33:45 crc kubenswrapper[4770]: I0126 19:33:45.781308 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:33:45 crc kubenswrapper[4770]: E0126 19:33:45.782284 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:33:59 crc kubenswrapper[4770]: I0126 19:33:59.767644 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:33:59 crc kubenswrapper[4770]: E0126 19:33:59.768409 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:34:10 crc kubenswrapper[4770]: I0126 19:34:10.767299 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:34:10 crc kubenswrapper[4770]: E0126 19:34:10.768209 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:34:21 crc kubenswrapper[4770]: I0126 19:34:21.767972 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:34:21 crc kubenswrapper[4770]: E0126 19:34:21.769021 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:34:33 crc kubenswrapper[4770]: I0126 19:34:33.767752 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:34:33 crc kubenswrapper[4770]: E0126 19:34:33.769096 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:34:47 crc kubenswrapper[4770]: I0126 19:34:47.767214 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:34:47 crc kubenswrapper[4770]: E0126 19:34:47.768053 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:35:01 crc kubenswrapper[4770]: I0126 19:35:01.767992 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:35:01 crc kubenswrapper[4770]: E0126 19:35:01.768734 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:35:16 crc kubenswrapper[4770]: I0126 19:35:16.768074 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:35:16 crc kubenswrapper[4770]: E0126 19:35:16.769305 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:35:27 crc kubenswrapper[4770]: I0126 19:35:27.767062 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:35:27 crc kubenswrapper[4770]: E0126 19:35:27.769129 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:35:39 crc kubenswrapper[4770]: I0126 19:35:39.767308 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:35:39 crc kubenswrapper[4770]: E0126 19:35:39.768313 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:35:51 crc kubenswrapper[4770]: I0126 19:35:51.767300 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:35:51 crc kubenswrapper[4770]: E0126 19:35:51.768563 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:36:05 crc kubenswrapper[4770]: I0126 19:36:05.777505 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:36:05 crc kubenswrapper[4770]: E0126 19:36:05.780529 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:36:19 crc kubenswrapper[4770]: I0126 19:36:19.767924 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:36:19 crc kubenswrapper[4770]: E0126 19:36:19.769218 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:36:30 crc kubenswrapper[4770]: I0126 19:36:30.767163 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:36:31 crc kubenswrapper[4770]: I0126 19:36:31.363168 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"eead25c2b312d379ddf631e3ba696f04949854f3f20ae879f89eef7502e82572"} Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.256207 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fnlqs"] Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.259034 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.267076 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnlqs"] Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.379414 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-utilities\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.379488 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb65h\" (UniqueName: \"kubernetes.io/projected/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-kube-api-access-cb65h\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.379585 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-catalog-content\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 
26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.481537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-utilities\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.481597 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb65h\" (UniqueName: \"kubernetes.io/projected/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-kube-api-access-cb65h\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.481659 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-catalog-content\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.482016 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-utilities\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.482193 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-catalog-content\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 
19:38:14.508867 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb65h\" (UniqueName: \"kubernetes.io/projected/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-kube-api-access-cb65h\") pod \"redhat-marketplace-fnlqs\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:14 crc kubenswrapper[4770]: I0126 19:38:14.588043 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:15 crc kubenswrapper[4770]: I0126 19:38:15.100283 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnlqs"] Jan 26 19:38:15 crc kubenswrapper[4770]: I0126 19:38:15.359660 4770 generic.go:334] "Generic (PLEG): container finished" podID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerID="e601ea3804800139655c1a6c80d94d302b1a18f2f241eb87fb2708680bcfdd16" exitCode=0 Jan 26 19:38:15 crc kubenswrapper[4770]: I0126 19:38:15.359732 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnlqs" event={"ID":"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175","Type":"ContainerDied","Data":"e601ea3804800139655c1a6c80d94d302b1a18f2f241eb87fb2708680bcfdd16"} Jan 26 19:38:15 crc kubenswrapper[4770]: I0126 19:38:15.359775 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnlqs" event={"ID":"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175","Type":"ContainerStarted","Data":"87bffc5855294001c0b336eb7c596c1f331b84c25162b704bca1d43f95db2a88"} Jan 26 19:38:15 crc kubenswrapper[4770]: I0126 19:38:15.362890 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:38:16 crc kubenswrapper[4770]: I0126 19:38:16.377847 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnlqs" 
event={"ID":"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175","Type":"ContainerStarted","Data":"da34d3b9142c3b17f90f2e533798c9a37b5a11b5424f35b8c2e51393016fc2bb"} Jan 26 19:38:17 crc kubenswrapper[4770]: I0126 19:38:17.391989 4770 generic.go:334] "Generic (PLEG): container finished" podID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerID="da34d3b9142c3b17f90f2e533798c9a37b5a11b5424f35b8c2e51393016fc2bb" exitCode=0 Jan 26 19:38:17 crc kubenswrapper[4770]: I0126 19:38:17.392284 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnlqs" event={"ID":"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175","Type":"ContainerDied","Data":"da34d3b9142c3b17f90f2e533798c9a37b5a11b5424f35b8c2e51393016fc2bb"} Jan 26 19:38:18 crc kubenswrapper[4770]: I0126 19:38:18.416883 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnlqs" event={"ID":"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175","Type":"ContainerStarted","Data":"a6a3ee3688f6f4326911c7dbea130a0dbf416d888b3a604aef52018035371fac"} Jan 26 19:38:18 crc kubenswrapper[4770]: I0126 19:38:18.447389 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fnlqs" podStartSLOduration=1.773472368 podStartE2EDuration="4.447369196s" podCreationTimestamp="2026-01-26 19:38:14 +0000 UTC" firstStartedPulling="2026-01-26 19:38:15.362669008 +0000 UTC m=+3379.927575740" lastFinishedPulling="2026-01-26 19:38:18.036565786 +0000 UTC m=+3382.601472568" observedRunningTime="2026-01-26 19:38:18.441567737 +0000 UTC m=+3383.006474499" watchObservedRunningTime="2026-01-26 19:38:18.447369196 +0000 UTC m=+3383.012275938" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.622280 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vz76s"] Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.626221 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.648571 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vz76s"] Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.726169 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-catalog-content\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.726222 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxm2\" (UniqueName: \"kubernetes.io/projected/8e695328-f3d1-4f68-9993-0a97c2f7d255-kube-api-access-6zxm2\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.726455 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-utilities\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.829372 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-catalog-content\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.829413 4770 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6zxm2\" (UniqueName: \"kubernetes.io/projected/8e695328-f3d1-4f68-9993-0a97c2f7d255-kube-api-access-6zxm2\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.829476 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-utilities\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.830204 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-catalog-content\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.830586 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-utilities\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.866979 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zxm2\" (UniqueName: \"kubernetes.io/projected/8e695328-f3d1-4f68-9993-0a97c2f7d255-kube-api-access-6zxm2\") pod \"redhat-operators-vz76s\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:20 crc kubenswrapper[4770]: I0126 19:38:20.949694 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:21 crc kubenswrapper[4770]: I0126 19:38:21.430678 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vz76s"] Jan 26 19:38:21 crc kubenswrapper[4770]: W0126 19:38:21.448503 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e695328_f3d1_4f68_9993_0a97c2f7d255.slice/crio-e2440cedb4ad1a9ef3870d794fa88d00af430b14af4089692a48137550a6ead1 WatchSource:0}: Error finding container e2440cedb4ad1a9ef3870d794fa88d00af430b14af4089692a48137550a6ead1: Status 404 returned error can't find the container with id e2440cedb4ad1a9ef3870d794fa88d00af430b14af4089692a48137550a6ead1 Jan 26 19:38:22 crc kubenswrapper[4770]: I0126 19:38:22.456120 4770 generic.go:334] "Generic (PLEG): container finished" podID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerID="2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e" exitCode=0 Jan 26 19:38:22 crc kubenswrapper[4770]: I0126 19:38:22.456192 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz76s" event={"ID":"8e695328-f3d1-4f68-9993-0a97c2f7d255","Type":"ContainerDied","Data":"2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e"} Jan 26 19:38:22 crc kubenswrapper[4770]: I0126 19:38:22.456455 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz76s" event={"ID":"8e695328-f3d1-4f68-9993-0a97c2f7d255","Type":"ContainerStarted","Data":"e2440cedb4ad1a9ef3870d794fa88d00af430b14af4089692a48137550a6ead1"} Jan 26 19:38:24 crc kubenswrapper[4770]: I0126 19:38:24.493616 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz76s" 
event={"ID":"8e695328-f3d1-4f68-9993-0a97c2f7d255","Type":"ContainerStarted","Data":"740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c"} Jan 26 19:38:24 crc kubenswrapper[4770]: I0126 19:38:24.588142 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:24 crc kubenswrapper[4770]: I0126 19:38:24.588207 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:24 crc kubenswrapper[4770]: I0126 19:38:24.651689 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:25 crc kubenswrapper[4770]: I0126 19:38:25.592779 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.028086 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-td7dg"] Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.030987 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.041153 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-td7dg"] Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.161489 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-catalog-content\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.161538 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmq8\" (UniqueName: \"kubernetes.io/projected/09507dd3-4540-4305-b4ab-a59ad0a371a0-kube-api-access-rsmq8\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.161901 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-utilities\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.264582 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-catalog-content\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.264640 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rsmq8\" (UniqueName: \"kubernetes.io/projected/09507dd3-4540-4305-b4ab-a59ad0a371a0-kube-api-access-rsmq8\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.264785 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-utilities\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.265441 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-utilities\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.265757 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-catalog-content\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.315612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsmq8\" (UniqueName: \"kubernetes.io/projected/09507dd3-4540-4305-b4ab-a59ad0a371a0-kube-api-access-rsmq8\") pod \"certified-operators-td7dg\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.376582 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.522171 4770 generic.go:334] "Generic (PLEG): container finished" podID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerID="740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c" exitCode=0 Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.522255 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz76s" event={"ID":"8e695328-f3d1-4f68-9993-0a97c2f7d255","Type":"ContainerDied","Data":"740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c"} Jan 26 19:38:26 crc kubenswrapper[4770]: I0126 19:38:26.987733 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-td7dg"] Jan 26 19:38:26 crc kubenswrapper[4770]: W0126 19:38:26.991015 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09507dd3_4540_4305_b4ab_a59ad0a371a0.slice/crio-395a1a5a766d2293a5277d59f9a96a500b50a80c4523825f9b46acdd4d802f33 WatchSource:0}: Error finding container 395a1a5a766d2293a5277d59f9a96a500b50a80c4523825f9b46acdd4d802f33: Status 404 returned error can't find the container with id 395a1a5a766d2293a5277d59f9a96a500b50a80c4523825f9b46acdd4d802f33 Jan 26 19:38:27 crc kubenswrapper[4770]: I0126 19:38:27.534789 4770 generic.go:334] "Generic (PLEG): container finished" podID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerID="60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4" exitCode=0 Jan 26 19:38:27 crc kubenswrapper[4770]: I0126 19:38:27.534839 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-td7dg" event={"ID":"09507dd3-4540-4305-b4ab-a59ad0a371a0","Type":"ContainerDied","Data":"60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4"} Jan 26 19:38:27 crc kubenswrapper[4770]: I0126 
19:38:27.535142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-td7dg" event={"ID":"09507dd3-4540-4305-b4ab-a59ad0a371a0","Type":"ContainerStarted","Data":"395a1a5a766d2293a5277d59f9a96a500b50a80c4523825f9b46acdd4d802f33"} Jan 26 19:38:27 crc kubenswrapper[4770]: I0126 19:38:27.546576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz76s" event={"ID":"8e695328-f3d1-4f68-9993-0a97c2f7d255","Type":"ContainerStarted","Data":"22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209"} Jan 26 19:38:27 crc kubenswrapper[4770]: I0126 19:38:27.591834 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vz76s" podStartSLOduration=2.867493571 podStartE2EDuration="7.59181349s" podCreationTimestamp="2026-01-26 19:38:20 +0000 UTC" firstStartedPulling="2026-01-26 19:38:22.458735307 +0000 UTC m=+3387.023642049" lastFinishedPulling="2026-01-26 19:38:27.183055236 +0000 UTC m=+3391.747961968" observedRunningTime="2026-01-26 19:38:27.581053655 +0000 UTC m=+3392.145960387" watchObservedRunningTime="2026-01-26 19:38:27.59181349 +0000 UTC m=+3392.156720222" Jan 26 19:38:28 crc kubenswrapper[4770]: I0126 19:38:28.413105 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnlqs"] Jan 26 19:38:28 crc kubenswrapper[4770]: I0126 19:38:28.413688 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fnlqs" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="registry-server" containerID="cri-o://a6a3ee3688f6f4326911c7dbea130a0dbf416d888b3a604aef52018035371fac" gracePeriod=2 Jan 26 19:38:28 crc kubenswrapper[4770]: I0126 19:38:28.558176 4770 generic.go:334] "Generic (PLEG): container finished" podID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" 
containerID="a6a3ee3688f6f4326911c7dbea130a0dbf416d888b3a604aef52018035371fac" exitCode=0 Jan 26 19:38:28 crc kubenswrapper[4770]: I0126 19:38:28.558247 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnlqs" event={"ID":"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175","Type":"ContainerDied","Data":"a6a3ee3688f6f4326911c7dbea130a0dbf416d888b3a604aef52018035371fac"} Jan 26 19:38:28 crc kubenswrapper[4770]: I0126 19:38:28.560593 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-td7dg" event={"ID":"09507dd3-4540-4305-b4ab-a59ad0a371a0","Type":"ContainerStarted","Data":"47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129"} Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.038579 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.180262 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-catalog-content\") pod \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.180450 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-utilities\") pod \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.180574 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb65h\" (UniqueName: \"kubernetes.io/projected/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-kube-api-access-cb65h\") pod \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\" (UID: \"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175\") " 
Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.181093 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-utilities" (OuterVolumeSpecName: "utilities") pod "b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" (UID: "b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.181959 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.186331 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-kube-api-access-cb65h" (OuterVolumeSpecName: "kube-api-access-cb65h") pod "b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" (UID: "b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175"). InnerVolumeSpecName "kube-api-access-cb65h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.203484 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" (UID: "b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.283528 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb65h\" (UniqueName: \"kubernetes.io/projected/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-kube-api-access-cb65h\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.283728 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.577617 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnlqs" event={"ID":"b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175","Type":"ContainerDied","Data":"87bffc5855294001c0b336eb7c596c1f331b84c25162b704bca1d43f95db2a88"} Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.577733 4770 scope.go:117] "RemoveContainer" containerID="a6a3ee3688f6f4326911c7dbea130a0dbf416d888b3a604aef52018035371fac" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.577887 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnlqs" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.581746 4770 generic.go:334] "Generic (PLEG): container finished" podID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerID="47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129" exitCode=0 Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.581815 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-td7dg" event={"ID":"09507dd3-4540-4305-b4ab-a59ad0a371a0","Type":"ContainerDied","Data":"47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129"} Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.624562 4770 scope.go:117] "RemoveContainer" containerID="da34d3b9142c3b17f90f2e533798c9a37b5a11b5424f35b8c2e51393016fc2bb" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.654310 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnlqs"] Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.668624 4770 scope.go:117] "RemoveContainer" containerID="e601ea3804800139655c1a6c80d94d302b1a18f2f241eb87fb2708680bcfdd16" Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.682107 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnlqs"] Jan 26 19:38:29 crc kubenswrapper[4770]: I0126 19:38:29.793022 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" path="/var/lib/kubelet/pods/b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175/volumes" Jan 26 19:38:30 crc kubenswrapper[4770]: I0126 19:38:30.330307 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:38:30 crc 
kubenswrapper[4770]: I0126 19:38:30.330364 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:38:30 crc kubenswrapper[4770]: I0126 19:38:30.595174 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-td7dg" event={"ID":"09507dd3-4540-4305-b4ab-a59ad0a371a0","Type":"ContainerStarted","Data":"f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f"} Jan 26 19:38:30 crc kubenswrapper[4770]: I0126 19:38:30.625854 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-td7dg" podStartSLOduration=3.180737053 podStartE2EDuration="5.62583105s" podCreationTimestamp="2026-01-26 19:38:25 +0000 UTC" firstStartedPulling="2026-01-26 19:38:27.543158529 +0000 UTC m=+3392.108065261" lastFinishedPulling="2026-01-26 19:38:29.988252526 +0000 UTC m=+3394.553159258" observedRunningTime="2026-01-26 19:38:30.617439652 +0000 UTC m=+3395.182346384" watchObservedRunningTime="2026-01-26 19:38:30.62583105 +0000 UTC m=+3395.190737782" Jan 26 19:38:30 crc kubenswrapper[4770]: I0126 19:38:30.950016 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:30 crc kubenswrapper[4770]: I0126 19:38:30.950061 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:32 crc kubenswrapper[4770]: I0126 19:38:32.000582 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vz76s" podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="registry-server" probeResult="failure" output=< Jan 26 19:38:32 
crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 19:38:32 crc kubenswrapper[4770]: > Jan 26 19:38:36 crc kubenswrapper[4770]: I0126 19:38:36.377734 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:36 crc kubenswrapper[4770]: I0126 19:38:36.379051 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:36 crc kubenswrapper[4770]: I0126 19:38:36.455626 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:36 crc kubenswrapper[4770]: I0126 19:38:36.720400 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:36 crc kubenswrapper[4770]: I0126 19:38:36.779711 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-td7dg"] Jan 26 19:38:38 crc kubenswrapper[4770]: I0126 19:38:38.680875 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-td7dg" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="registry-server" containerID="cri-o://f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f" gracePeriod=2 Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.214263 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.343368 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsmq8\" (UniqueName: \"kubernetes.io/projected/09507dd3-4540-4305-b4ab-a59ad0a371a0-kube-api-access-rsmq8\") pod \"09507dd3-4540-4305-b4ab-a59ad0a371a0\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.343815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-catalog-content\") pod \"09507dd3-4540-4305-b4ab-a59ad0a371a0\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.344294 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-utilities\") pod \"09507dd3-4540-4305-b4ab-a59ad0a371a0\" (UID: \"09507dd3-4540-4305-b4ab-a59ad0a371a0\") " Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.344833 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-utilities" (OuterVolumeSpecName: "utilities") pod "09507dd3-4540-4305-b4ab-a59ad0a371a0" (UID: "09507dd3-4540-4305-b4ab-a59ad0a371a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.345344 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.352194 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09507dd3-4540-4305-b4ab-a59ad0a371a0-kube-api-access-rsmq8" (OuterVolumeSpecName: "kube-api-access-rsmq8") pod "09507dd3-4540-4305-b4ab-a59ad0a371a0" (UID: "09507dd3-4540-4305-b4ab-a59ad0a371a0"). InnerVolumeSpecName "kube-api-access-rsmq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.389643 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09507dd3-4540-4305-b4ab-a59ad0a371a0" (UID: "09507dd3-4540-4305-b4ab-a59ad0a371a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.448678 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsmq8\" (UniqueName: \"kubernetes.io/projected/09507dd3-4540-4305-b4ab-a59ad0a371a0-kube-api-access-rsmq8\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.449030 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09507dd3-4540-4305-b4ab-a59ad0a371a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.693354 4770 generic.go:334] "Generic (PLEG): container finished" podID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerID="f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f" exitCode=0 Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.693410 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-td7dg" event={"ID":"09507dd3-4540-4305-b4ab-a59ad0a371a0","Type":"ContainerDied","Data":"f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f"} Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.693436 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-td7dg" event={"ID":"09507dd3-4540-4305-b4ab-a59ad0a371a0","Type":"ContainerDied","Data":"395a1a5a766d2293a5277d59f9a96a500b50a80c4523825f9b46acdd4d802f33"} Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.693433 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-td7dg" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.693455 4770 scope.go:117] "RemoveContainer" containerID="f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.734873 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-td7dg"] Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.741956 4770 scope.go:117] "RemoveContainer" containerID="47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.746352 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-td7dg"] Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.761996 4770 scope.go:117] "RemoveContainer" containerID="60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.784399 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" path="/var/lib/kubelet/pods/09507dd3-4540-4305-b4ab-a59ad0a371a0/volumes" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.808565 4770 scope.go:117] "RemoveContainer" containerID="f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f" Jan 26 19:38:39 crc kubenswrapper[4770]: E0126 19:38:39.809089 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f\": container with ID starting with f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f not found: ID does not exist" containerID="f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.809203 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f"} err="failed to get container status \"f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f\": rpc error: code = NotFound desc = could not find container \"f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f\": container with ID starting with f7a7d1dca152b073f3694f84be1030136126c1de3f4d64ce98e9e5ec75a48c5f not found: ID does not exist" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.809291 4770 scope.go:117] "RemoveContainer" containerID="47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129" Jan 26 19:38:39 crc kubenswrapper[4770]: E0126 19:38:39.809717 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129\": container with ID starting with 47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129 not found: ID does not exist" containerID="47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.809758 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129"} err="failed to get container status \"47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129\": rpc error: code = NotFound desc = could not find container \"47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129\": container with ID starting with 47fe4e9cfa72cb7c1a5fe37b8678b55afdfaed317c2fe93fa0c73884babd8129 not found: ID does not exist" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.809785 4770 scope.go:117] "RemoveContainer" containerID="60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4" Jan 26 19:38:39 crc kubenswrapper[4770]: E0126 19:38:39.810073 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4\": container with ID starting with 60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4 not found: ID does not exist" containerID="60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4" Jan 26 19:38:39 crc kubenswrapper[4770]: I0126 19:38:39.810154 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4"} err="failed to get container status \"60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4\": rpc error: code = NotFound desc = could not find container \"60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4\": container with ID starting with 60b2df4363f71172a829a672e940c7be6f56dc9a56635b3f1ab2a9d3fe74d5b4 not found: ID does not exist" Jan 26 19:38:41 crc kubenswrapper[4770]: I0126 19:38:41.022762 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:41 crc kubenswrapper[4770]: I0126 19:38:41.105006 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:42 crc kubenswrapper[4770]: I0126 19:38:42.105380 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vz76s"] Jan 26 19:38:42 crc kubenswrapper[4770]: I0126 19:38:42.732724 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vz76s" podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="registry-server" containerID="cri-o://22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209" gracePeriod=2 Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.221272 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.343256 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zxm2\" (UniqueName: \"kubernetes.io/projected/8e695328-f3d1-4f68-9993-0a97c2f7d255-kube-api-access-6zxm2\") pod \"8e695328-f3d1-4f68-9993-0a97c2f7d255\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.343424 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-utilities\") pod \"8e695328-f3d1-4f68-9993-0a97c2f7d255\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.343559 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-catalog-content\") pod \"8e695328-f3d1-4f68-9993-0a97c2f7d255\" (UID: \"8e695328-f3d1-4f68-9993-0a97c2f7d255\") " Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.344119 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-utilities" (OuterVolumeSpecName: "utilities") pod "8e695328-f3d1-4f68-9993-0a97c2f7d255" (UID: "8e695328-f3d1-4f68-9993-0a97c2f7d255"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.350967 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e695328-f3d1-4f68-9993-0a97c2f7d255-kube-api-access-6zxm2" (OuterVolumeSpecName: "kube-api-access-6zxm2") pod "8e695328-f3d1-4f68-9993-0a97c2f7d255" (UID: "8e695328-f3d1-4f68-9993-0a97c2f7d255"). InnerVolumeSpecName "kube-api-access-6zxm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.445818 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zxm2\" (UniqueName: \"kubernetes.io/projected/8e695328-f3d1-4f68-9993-0a97c2f7d255-kube-api-access-6zxm2\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.445983 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.473721 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e695328-f3d1-4f68-9993-0a97c2f7d255" (UID: "8e695328-f3d1-4f68-9993-0a97c2f7d255"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.547919 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e695328-f3d1-4f68-9993-0a97c2f7d255-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.749335 4770 generic.go:334] "Generic (PLEG): container finished" podID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerID="22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209" exitCode=0 Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.749396 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vz76s" event={"ID":"8e695328-f3d1-4f68-9993-0a97c2f7d255","Type":"ContainerDied","Data":"22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209"} Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.749438 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-vz76s" event={"ID":"8e695328-f3d1-4f68-9993-0a97c2f7d255","Type":"ContainerDied","Data":"e2440cedb4ad1a9ef3870d794fa88d00af430b14af4089692a48137550a6ead1"} Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.749402 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vz76s" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.749470 4770 scope.go:117] "RemoveContainer" containerID="22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.786009 4770 scope.go:117] "RemoveContainer" containerID="740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.812687 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vz76s"] Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.824292 4770 scope.go:117] "RemoveContainer" containerID="2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.824986 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vz76s"] Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.859302 4770 scope.go:117] "RemoveContainer" containerID="22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209" Jan 26 19:38:43 crc kubenswrapper[4770]: E0126 19:38:43.859654 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209\": container with ID starting with 22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209 not found: ID does not exist" containerID="22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.859685 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209"} err="failed to get container status \"22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209\": rpc error: code = NotFound desc = could not find container \"22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209\": container with ID starting with 22d6d3095442189dce8d1ceabf821d7f3946c776d4875e0dbd9e9b3861729209 not found: ID does not exist" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.859725 4770 scope.go:117] "RemoveContainer" containerID="740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c" Jan 26 19:38:43 crc kubenswrapper[4770]: E0126 19:38:43.859990 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c\": container with ID starting with 740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c not found: ID does not exist" containerID="740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.860015 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c"} err="failed to get container status \"740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c\": rpc error: code = NotFound desc = could not find container \"740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c\": container with ID starting with 740ddbe3ccb5edf943958f67945cc22d7a31719448bb5a6ab959776b78fa853c not found: ID does not exist" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.860031 4770 scope.go:117] "RemoveContainer" containerID="2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e" Jan 26 19:38:43 crc kubenswrapper[4770]: E0126 
19:38:43.860233 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e\": container with ID starting with 2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e not found: ID does not exist" containerID="2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e" Jan 26 19:38:43 crc kubenswrapper[4770]: I0126 19:38:43.860258 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e"} err="failed to get container status \"2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e\": rpc error: code = NotFound desc = could not find container \"2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e\": container with ID starting with 2f5323baf6141c15959fec9d8cc4abb30703f27570d9542b3d76cd4d68efc42e not found: ID does not exist" Jan 26 19:38:45 crc kubenswrapper[4770]: I0126 19:38:45.788167 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" path="/var/lib/kubelet/pods/8e695328-f3d1-4f68-9993-0a97c2f7d255/volumes" Jan 26 19:39:00 crc kubenswrapper[4770]: I0126 19:39:00.330745 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:39:00 crc kubenswrapper[4770]: I0126 19:39:00.331235 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 26 19:39:30 crc kubenswrapper[4770]: I0126 19:39:30.331111 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:39:30 crc kubenswrapper[4770]: I0126 19:39:30.331708 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:39:30 crc kubenswrapper[4770]: I0126 19:39:30.331796 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:39:30 crc kubenswrapper[4770]: I0126 19:39:30.333088 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eead25c2b312d379ddf631e3ba696f04949854f3f20ae879f89eef7502e82572"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:39:30 crc kubenswrapper[4770]: I0126 19:39:30.333190 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://eead25c2b312d379ddf631e3ba696f04949854f3f20ae879f89eef7502e82572" gracePeriod=600 Jan 26 19:39:31 crc kubenswrapper[4770]: I0126 19:39:31.285954 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" 
containerID="eead25c2b312d379ddf631e3ba696f04949854f3f20ae879f89eef7502e82572" exitCode=0 Jan 26 19:39:31 crc kubenswrapper[4770]: I0126 19:39:31.286034 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"eead25c2b312d379ddf631e3ba696f04949854f3f20ae879f89eef7502e82572"} Jan 26 19:39:31 crc kubenswrapper[4770]: I0126 19:39:31.286423 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105"} Jan 26 19:39:31 crc kubenswrapper[4770]: I0126 19:39:31.286446 4770 scope.go:117] "RemoveContainer" containerID="0b4714f81337f572d126b363c96a81a44a69cdaf8e84adfec8363383b713d256" Jan 26 19:41:30 crc kubenswrapper[4770]: I0126 19:41:30.331224 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:41:30 crc kubenswrapper[4770]: I0126 19:41:30.331994 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.546772 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-99vkg"] Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547807 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="extract-content" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547827 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="extract-content" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547848 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="extract-content" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547856 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="extract-content" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547871 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547879 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547891 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="extract-utilities" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547898 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="extract-utilities" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547917 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547924 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547935 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="extract-utilities" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547941 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="extract-utilities" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547956 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="extract-utilities" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547962 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="extract-utilities" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.547983 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.547990 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: E0126 19:41:48.548004 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="extract-content" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.548011 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="extract-content" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.548241 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23e9fae-0a8b-4f3a-bd60-ac90d6c5f175" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.548263 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="09507dd3-4540-4305-b4ab-a59ad0a371a0" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.548275 4770 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8e695328-f3d1-4f68-9993-0a97c2f7d255" containerName="registry-server" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.549901 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.573117 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99vkg"] Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.600094 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctcn\" (UniqueName: \"kubernetes.io/projected/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-kube-api-access-7ctcn\") pod \"community-operators-99vkg\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.600195 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-catalog-content\") pod \"community-operators-99vkg\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.600278 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-utilities\") pod \"community-operators-99vkg\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.702360 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-catalog-content\") pod \"community-operators-99vkg\" (UID: 
\"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.702460 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-utilities\") pod \"community-operators-99vkg\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.702560 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctcn\" (UniqueName: \"kubernetes.io/projected/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-kube-api-access-7ctcn\") pod \"community-operators-99vkg\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.703085 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-utilities\") pod \"community-operators-99vkg\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.703092 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-catalog-content\") pod \"community-operators-99vkg\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.723742 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctcn\" (UniqueName: \"kubernetes.io/projected/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-kube-api-access-7ctcn\") pod \"community-operators-99vkg\" (UID: 
\"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:48 crc kubenswrapper[4770]: I0126 19:41:48.870800 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:49 crc kubenswrapper[4770]: I0126 19:41:49.372074 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99vkg"] Jan 26 19:41:49 crc kubenswrapper[4770]: I0126 19:41:49.822197 4770 generic.go:334] "Generic (PLEG): container finished" podID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerID="c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd" exitCode=0 Jan 26 19:41:49 crc kubenswrapper[4770]: I0126 19:41:49.822260 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vkg" event={"ID":"8a1ce3fa-0cca-429b-bacf-209dc002e6cf","Type":"ContainerDied","Data":"c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd"} Jan 26 19:41:49 crc kubenswrapper[4770]: I0126 19:41:49.822501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vkg" event={"ID":"8a1ce3fa-0cca-429b-bacf-209dc002e6cf","Type":"ContainerStarted","Data":"f75231eb940090dad700b5d6280621f61b75821053f87013ebe66630141a5658"} Jan 26 19:41:50 crc kubenswrapper[4770]: I0126 19:41:50.835054 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vkg" event={"ID":"8a1ce3fa-0cca-429b-bacf-209dc002e6cf","Type":"ContainerStarted","Data":"944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2"} Jan 26 19:41:51 crc kubenswrapper[4770]: I0126 19:41:51.845882 4770 generic.go:334] "Generic (PLEG): container finished" podID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerID="944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2" exitCode=0 Jan 26 19:41:51 crc kubenswrapper[4770]: I0126 
19:41:51.845981 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vkg" event={"ID":"8a1ce3fa-0cca-429b-bacf-209dc002e6cf","Type":"ContainerDied","Data":"944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2"} Jan 26 19:41:52 crc kubenswrapper[4770]: I0126 19:41:52.859804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vkg" event={"ID":"8a1ce3fa-0cca-429b-bacf-209dc002e6cf","Type":"ContainerStarted","Data":"c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893"} Jan 26 19:41:52 crc kubenswrapper[4770]: I0126 19:41:52.887555 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-99vkg" podStartSLOduration=2.43133258 podStartE2EDuration="4.887531262s" podCreationTimestamp="2026-01-26 19:41:48 +0000 UTC" firstStartedPulling="2026-01-26 19:41:49.824465478 +0000 UTC m=+3594.389372220" lastFinishedPulling="2026-01-26 19:41:52.28066417 +0000 UTC m=+3596.845570902" observedRunningTime="2026-01-26 19:41:52.875312858 +0000 UTC m=+3597.440219600" watchObservedRunningTime="2026-01-26 19:41:52.887531262 +0000 UTC m=+3597.452438004" Jan 26 19:41:58 crc kubenswrapper[4770]: I0126 19:41:58.871682 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:58 crc kubenswrapper[4770]: I0126 19:41:58.872137 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:58 crc kubenswrapper[4770]: I0126 19:41:58.942502 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:41:59 crc kubenswrapper[4770]: I0126 19:41:59.014770 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-99vkg" Jan 26 
19:41:59 crc kubenswrapper[4770]: I0126 19:41:59.198549 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99vkg"] Jan 26 19:42:00 crc kubenswrapper[4770]: I0126 19:42:00.330773 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:42:00 crc kubenswrapper[4770]: I0126 19:42:00.331167 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:42:00 crc kubenswrapper[4770]: I0126 19:42:00.937245 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-99vkg" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="registry-server" containerID="cri-o://c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893" gracePeriod=2 Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.444594 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.488450 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-catalog-content\") pod \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.488760 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ctcn\" (UniqueName: \"kubernetes.io/projected/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-kube-api-access-7ctcn\") pod \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.488868 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-utilities\") pod \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\" (UID: \"8a1ce3fa-0cca-429b-bacf-209dc002e6cf\") " Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.489586 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-utilities" (OuterVolumeSpecName: "utilities") pod "8a1ce3fa-0cca-429b-bacf-209dc002e6cf" (UID: "8a1ce3fa-0cca-429b-bacf-209dc002e6cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.502873 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-kube-api-access-7ctcn" (OuterVolumeSpecName: "kube-api-access-7ctcn") pod "8a1ce3fa-0cca-429b-bacf-209dc002e6cf" (UID: "8a1ce3fa-0cca-429b-bacf-209dc002e6cf"). InnerVolumeSpecName "kube-api-access-7ctcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.590330 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ctcn\" (UniqueName: \"kubernetes.io/projected/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-kube-api-access-7ctcn\") on node \"crc\" DevicePath \"\"" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.590365 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.774348 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a1ce3fa-0cca-429b-bacf-209dc002e6cf" (UID: "8a1ce3fa-0cca-429b-bacf-209dc002e6cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.793628 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a1ce3fa-0cca-429b-bacf-209dc002e6cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.946409 4770 generic.go:334] "Generic (PLEG): container finished" podID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerID="c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893" exitCode=0 Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.946455 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99vkg" event={"ID":"8a1ce3fa-0cca-429b-bacf-209dc002e6cf","Type":"ContainerDied","Data":"c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893"} Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.946485 4770 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-99vkg" event={"ID":"8a1ce3fa-0cca-429b-bacf-209dc002e6cf","Type":"ContainerDied","Data":"f75231eb940090dad700b5d6280621f61b75821053f87013ebe66630141a5658"} Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.946507 4770 scope.go:117] "RemoveContainer" containerID="c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.946639 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99vkg" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.966411 4770 scope.go:117] "RemoveContainer" containerID="944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2" Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.970477 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99vkg"] Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.980191 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-99vkg"] Jan 26 19:42:01 crc kubenswrapper[4770]: I0126 19:42:01.990554 4770 scope.go:117] "RemoveContainer" containerID="c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd" Jan 26 19:42:02 crc kubenswrapper[4770]: I0126 19:42:02.029627 4770 scope.go:117] "RemoveContainer" containerID="c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893" Jan 26 19:42:02 crc kubenswrapper[4770]: E0126 19:42:02.030461 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893\": container with ID starting with c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893 not found: ID does not exist" containerID="c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893" Jan 26 19:42:02 crc kubenswrapper[4770]: I0126 
19:42:02.030508 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893"} err="failed to get container status \"c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893\": rpc error: code = NotFound desc = could not find container \"c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893\": container with ID starting with c1856c27617e6e6f3b50fab75a8a3aa1487dd49c7a6bbb6d50edc3769a3f7893 not found: ID does not exist" Jan 26 19:42:02 crc kubenswrapper[4770]: I0126 19:42:02.030535 4770 scope.go:117] "RemoveContainer" containerID="944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2" Jan 26 19:42:02 crc kubenswrapper[4770]: E0126 19:42:02.030999 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2\": container with ID starting with 944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2 not found: ID does not exist" containerID="944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2" Jan 26 19:42:02 crc kubenswrapper[4770]: I0126 19:42:02.031085 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2"} err="failed to get container status \"944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2\": rpc error: code = NotFound desc = could not find container \"944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2\": container with ID starting with 944f0ab35a11360067b33f2b9c64b311f59ce2c510fa64d691d12056649128e2 not found: ID does not exist" Jan 26 19:42:02 crc kubenswrapper[4770]: I0126 19:42:02.031165 4770 scope.go:117] "RemoveContainer" containerID="c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd" Jan 26 19:42:02 crc 
kubenswrapper[4770]: E0126 19:42:02.031484 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd\": container with ID starting with c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd not found: ID does not exist" containerID="c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd" Jan 26 19:42:02 crc kubenswrapper[4770]: I0126 19:42:02.031562 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd"} err="failed to get container status \"c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd\": rpc error: code = NotFound desc = could not find container \"c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd\": container with ID starting with c63ccddc9d9421ff38d0db75970ca3a2b5fe597441788e2e922a2d19d1ad1fbd not found: ID does not exist" Jan 26 19:42:03 crc kubenswrapper[4770]: I0126 19:42:03.779259 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" path="/var/lib/kubelet/pods/8a1ce3fa-0cca-429b-bacf-209dc002e6cf/volumes" Jan 26 19:42:30 crc kubenswrapper[4770]: I0126 19:42:30.330513 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:42:30 crc kubenswrapper[4770]: I0126 19:42:30.331334 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 26 19:42:30 crc kubenswrapper[4770]: I0126 19:42:30.331427 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:42:30 crc kubenswrapper[4770]: I0126 19:42:30.332831 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:42:30 crc kubenswrapper[4770]: I0126 19:42:30.332944 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" gracePeriod=600 Jan 26 19:42:30 crc kubenswrapper[4770]: E0126 19:42:30.470209 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:42:31 crc kubenswrapper[4770]: I0126 19:42:31.287301 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" exitCode=0 Jan 26 19:42:31 crc kubenswrapper[4770]: I0126 19:42:31.287654 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105"} Jan 26 19:42:31 crc kubenswrapper[4770]: I0126 19:42:31.287689 4770 scope.go:117] "RemoveContainer" containerID="eead25c2b312d379ddf631e3ba696f04949854f3f20ae879f89eef7502e82572" Jan 26 19:42:31 crc kubenswrapper[4770]: I0126 19:42:31.288461 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:42:31 crc kubenswrapper[4770]: E0126 19:42:31.288744 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:42:45 crc kubenswrapper[4770]: I0126 19:42:45.780978 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:42:45 crc kubenswrapper[4770]: E0126 19:42:45.781944 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:42:57 crc kubenswrapper[4770]: I0126 19:42:57.767218 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:42:57 crc kubenswrapper[4770]: E0126 19:42:57.768015 4770 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:43:10 crc kubenswrapper[4770]: I0126 19:43:10.767442 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:43:10 crc kubenswrapper[4770]: E0126 19:43:10.768493 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:43:21 crc kubenswrapper[4770]: I0126 19:43:21.768150 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:43:21 crc kubenswrapper[4770]: E0126 19:43:21.769374 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:43:32 crc kubenswrapper[4770]: I0126 19:43:32.769340 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:43:32 crc kubenswrapper[4770]: E0126 
19:43:32.770900 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:43:43 crc kubenswrapper[4770]: I0126 19:43:43.768185 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:43:43 crc kubenswrapper[4770]: E0126 19:43:43.770080 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:43:56 crc kubenswrapper[4770]: I0126 19:43:56.767092 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:43:56 crc kubenswrapper[4770]: E0126 19:43:56.769443 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:44:10 crc kubenswrapper[4770]: I0126 19:44:10.767951 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:44:10 crc 
kubenswrapper[4770]: E0126 19:44:10.769219 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:44:24 crc kubenswrapper[4770]: I0126 19:44:24.767860 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:44:24 crc kubenswrapper[4770]: E0126 19:44:24.768921 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:44:35 crc kubenswrapper[4770]: I0126 19:44:35.781376 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:44:35 crc kubenswrapper[4770]: E0126 19:44:35.782898 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:44:50 crc kubenswrapper[4770]: I0126 19:44:50.767310 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 
26 19:44:50 crc kubenswrapper[4770]: E0126 19:44:50.768628 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.176496 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl"] Jan 26 19:45:00 crc kubenswrapper[4770]: E0126 19:45:00.177415 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.177436 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="extract-utilities" Jan 26 19:45:00 crc kubenswrapper[4770]: E0126 19:45:00.177449 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.177455 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="extract-content" Jan 26 19:45:00 crc kubenswrapper[4770]: E0126 19:45:00.177481 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.177486 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.177672 4770 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8a1ce3fa-0cca-429b-bacf-209dc002e6cf" containerName="registry-server" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.178375 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.181075 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.185186 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.201636 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl"] Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.298769 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5kz\" (UniqueName: \"kubernetes.io/projected/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-kube-api-access-9d5kz\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.298922 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-secret-volume\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.298969 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-config-volume\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.400987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-secret-volume\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.401036 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-config-volume\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.401199 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5kz\" (UniqueName: \"kubernetes.io/projected/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-kube-api-access-9d5kz\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.402480 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-config-volume\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.411074 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-secret-volume\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.427111 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5kz\" (UniqueName: \"kubernetes.io/projected/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-kube-api-access-9d5kz\") pod \"collect-profiles-29490945-vhqsl\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:00 crc kubenswrapper[4770]: I0126 19:45:00.534791 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:01 crc kubenswrapper[4770]: I0126 19:45:01.025635 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl"] Jan 26 19:45:01 crc kubenswrapper[4770]: I0126 19:45:01.078588 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" event={"ID":"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423","Type":"ContainerStarted","Data":"ef772140c78dfbf76ad0c511f37255db532cd6422e560953e6ad995effe484ed"} Jan 26 19:45:02 crc kubenswrapper[4770]: I0126 19:45:02.090663 4770 generic.go:334] "Generic (PLEG): container finished" podID="85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" containerID="0aa312cb75aec03228b173dcbb9353dd1d9325ce6ae5f1e9f096429ef6222c1e" exitCode=0 Jan 26 19:45:02 crc kubenswrapper[4770]: I0126 19:45:02.090777 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" 
event={"ID":"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423","Type":"ContainerDied","Data":"0aa312cb75aec03228b173dcbb9353dd1d9325ce6ae5f1e9f096429ef6222c1e"} Jan 26 19:45:02 crc kubenswrapper[4770]: I0126 19:45:02.767950 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:45:02 crc kubenswrapper[4770]: E0126 19:45:02.768782 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.466578 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.573552 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-config-volume\") pod \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.573842 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d5kz\" (UniqueName: \"kubernetes.io/projected/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-kube-api-access-9d5kz\") pod \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.573871 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-secret-volume\") pod \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\" (UID: \"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423\") " Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.574042 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-config-volume" (OuterVolumeSpecName: "config-volume") pod "85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" (UID: "85697e5b-bd2d-48c9-a0f8-62d5ba1f4423"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.574356 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.581311 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-kube-api-access-9d5kz" (OuterVolumeSpecName: "kube-api-access-9d5kz") pod "85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" (UID: "85697e5b-bd2d-48c9-a0f8-62d5ba1f4423"). InnerVolumeSpecName "kube-api-access-9d5kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.583798 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" (UID: "85697e5b-bd2d-48c9-a0f8-62d5ba1f4423"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.676483 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d5kz\" (UniqueName: \"kubernetes.io/projected/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-kube-api-access-9d5kz\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:03 crc kubenswrapper[4770]: I0126 19:45:03.676520 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 19:45:04 crc kubenswrapper[4770]: I0126 19:45:04.112450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" event={"ID":"85697e5b-bd2d-48c9-a0f8-62d5ba1f4423","Type":"ContainerDied","Data":"ef772140c78dfbf76ad0c511f37255db532cd6422e560953e6ad995effe484ed"} Jan 26 19:45:04 crc kubenswrapper[4770]: I0126 19:45:04.112510 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef772140c78dfbf76ad0c511f37255db532cd6422e560953e6ad995effe484ed" Jan 26 19:45:04 crc kubenswrapper[4770]: I0126 19:45:04.112604 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl" Jan 26 19:45:04 crc kubenswrapper[4770]: I0126 19:45:04.578631 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt"] Jan 26 19:45:04 crc kubenswrapper[4770]: I0126 19:45:04.590778 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490900-cc5rt"] Jan 26 19:45:05 crc kubenswrapper[4770]: I0126 19:45:05.778053 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6377faf9-1047-4fe9-a2b8-816f0213cde0" path="/var/lib/kubelet/pods/6377faf9-1047-4fe9-a2b8-816f0213cde0/volumes" Jan 26 19:45:16 crc kubenswrapper[4770]: I0126 19:45:16.769039 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:45:16 crc kubenswrapper[4770]: E0126 19:45:16.770111 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:45:21 crc kubenswrapper[4770]: I0126 19:45:21.159863 4770 scope.go:117] "RemoveContainer" containerID="7bc7ac62957dc5d3243f189fa968d4076845aa518ea5c71b04934269bd6f52b6" Jan 26 19:45:27 crc kubenswrapper[4770]: I0126 19:45:27.767351 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:45:27 crc kubenswrapper[4770]: E0126 19:45:27.768488 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:45:36 crc kubenswrapper[4770]: E0126 19:45:36.885772 4770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:47980->38.102.83.51:41531: write tcp 38.102.83.51:47980->38.102.83.51:41531: write: broken pipe Jan 26 19:45:41 crc kubenswrapper[4770]: I0126 19:45:41.767536 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:45:41 crc kubenswrapper[4770]: E0126 19:45:41.768345 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:45:55 crc kubenswrapper[4770]: I0126 19:45:55.782508 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:45:55 crc kubenswrapper[4770]: E0126 19:45:55.784006 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:46:08 crc kubenswrapper[4770]: I0126 19:46:08.768718 4770 scope.go:117] "RemoveContainer" 
containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:46:08 crc kubenswrapper[4770]: E0126 19:46:08.769784 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:46:21 crc kubenswrapper[4770]: I0126 19:46:21.767933 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:46:21 crc kubenswrapper[4770]: E0126 19:46:21.769373 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:46:35 crc kubenswrapper[4770]: I0126 19:46:35.786541 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:46:35 crc kubenswrapper[4770]: E0126 19:46:35.789794 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:46:47 crc kubenswrapper[4770]: I0126 19:46:47.767616 4770 scope.go:117] 
"RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:46:47 crc kubenswrapper[4770]: E0126 19:46:47.768374 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:47:00 crc kubenswrapper[4770]: I0126 19:47:00.767403 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:47:00 crc kubenswrapper[4770]: E0126 19:47:00.768394 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:47:14 crc kubenswrapper[4770]: I0126 19:47:14.766930 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:47:14 crc kubenswrapper[4770]: E0126 19:47:14.767810 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:47:27 crc kubenswrapper[4770]: I0126 19:47:27.767318 
4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:47:27 crc kubenswrapper[4770]: E0126 19:47:27.768476 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:47:38 crc kubenswrapper[4770]: I0126 19:47:38.768105 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:47:39 crc kubenswrapper[4770]: I0126 19:47:39.486960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"5dc5c986d0afa24399d6378ca954d1710fd86f54578b1a65a78c6395457bb316"} Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.790344 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-stmc5"] Jan 26 19:49:08 crc kubenswrapper[4770]: E0126 19:49:08.801285 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" containerName="collect-profiles" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.801308 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" containerName="collect-profiles" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.801552 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" containerName="collect-profiles" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.806025 4770 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.824790 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stmc5"] Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.872491 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-catalog-content\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.872648 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-utilities\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.872952 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zz4n\" (UniqueName: \"kubernetes.io/projected/1b053a99-23e1-4fb0-be06-e7ea2369bbad-kube-api-access-6zz4n\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.975451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zz4n\" (UniqueName: \"kubernetes.io/projected/1b053a99-23e1-4fb0-be06-e7ea2369bbad-kube-api-access-6zz4n\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.975547 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-catalog-content\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.975656 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-utilities\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.976040 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-catalog-content\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:08 crc kubenswrapper[4770]: I0126 19:49:08.976052 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-utilities\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:09 crc kubenswrapper[4770]: I0126 19:49:09.001646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zz4n\" (UniqueName: \"kubernetes.io/projected/1b053a99-23e1-4fb0-be06-e7ea2369bbad-kube-api-access-6zz4n\") pod \"redhat-operators-stmc5\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:09 crc kubenswrapper[4770]: I0126 19:49:09.137096 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:09 crc kubenswrapper[4770]: I0126 19:49:09.661459 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stmc5"] Jan 26 19:49:10 crc kubenswrapper[4770]: I0126 19:49:10.520319 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerID="3c987aebcdb91fd032259daf7160a222960ae8ea98e8cf565d549cab47b979a8" exitCode=0 Jan 26 19:49:10 crc kubenswrapper[4770]: I0126 19:49:10.520378 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmc5" event={"ID":"1b053a99-23e1-4fb0-be06-e7ea2369bbad","Type":"ContainerDied","Data":"3c987aebcdb91fd032259daf7160a222960ae8ea98e8cf565d549cab47b979a8"} Jan 26 19:49:10 crc kubenswrapper[4770]: I0126 19:49:10.520760 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmc5" event={"ID":"1b053a99-23e1-4fb0-be06-e7ea2369bbad","Type":"ContainerStarted","Data":"cfa965235e7d7a1fb0457d41ec55161b6880f8d056789a7bf15e8fdcd1fa17fb"} Jan 26 19:49:10 crc kubenswrapper[4770]: I0126 19:49:10.523287 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:49:12 crc kubenswrapper[4770]: I0126 19:49:12.571288 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmc5" event={"ID":"1b053a99-23e1-4fb0-be06-e7ea2369bbad","Type":"ContainerStarted","Data":"154cbd4d6326b41123bde5b8610c9c1ed8f2b0df32b70e789b8b86178e4b24ca"} Jan 26 19:49:14 crc kubenswrapper[4770]: I0126 19:49:14.598365 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerID="154cbd4d6326b41123bde5b8610c9c1ed8f2b0df32b70e789b8b86178e4b24ca" exitCode=0 Jan 26 19:49:14 crc kubenswrapper[4770]: I0126 19:49:14.598452 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-stmc5" event={"ID":"1b053a99-23e1-4fb0-be06-e7ea2369bbad","Type":"ContainerDied","Data":"154cbd4d6326b41123bde5b8610c9c1ed8f2b0df32b70e789b8b86178e4b24ca"} Jan 26 19:49:15 crc kubenswrapper[4770]: I0126 19:49:15.611857 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmc5" event={"ID":"1b053a99-23e1-4fb0-be06-e7ea2369bbad","Type":"ContainerStarted","Data":"205964b7468e37f0a50915ad6e245e2a2ac287b700866629f7cf2f0004f8a9c2"} Jan 26 19:49:15 crc kubenswrapper[4770]: I0126 19:49:15.663365 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-stmc5" podStartSLOduration=3.163978632 podStartE2EDuration="7.663343835s" podCreationTimestamp="2026-01-26 19:49:08 +0000 UTC" firstStartedPulling="2026-01-26 19:49:10.523076737 +0000 UTC m=+4035.087983469" lastFinishedPulling="2026-01-26 19:49:15.02244189 +0000 UTC m=+4039.587348672" observedRunningTime="2026-01-26 19:49:15.654510257 +0000 UTC m=+4040.219416989" watchObservedRunningTime="2026-01-26 19:49:15.663343835 +0000 UTC m=+4040.228250587" Jan 26 19:49:19 crc kubenswrapper[4770]: I0126 19:49:19.137238 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:19 crc kubenswrapper[4770]: I0126 19:49:19.137912 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:20 crc kubenswrapper[4770]: I0126 19:49:20.205972 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-stmc5" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="registry-server" probeResult="failure" output=< Jan 26 19:49:20 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 19:49:20 crc kubenswrapper[4770]: > Jan 26 19:49:21 crc kubenswrapper[4770]: I0126 
19:49:21.868864 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mtdnp"] Jan 26 19:49:21 crc kubenswrapper[4770]: I0126 19:49:21.873191 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:21 crc kubenswrapper[4770]: I0126 19:49:21.910021 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtdnp"] Jan 26 19:49:21 crc kubenswrapper[4770]: I0126 19:49:21.995096 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsj7p\" (UniqueName: \"kubernetes.io/projected/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-kube-api-access-bsj7p\") pod \"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:21 crc kubenswrapper[4770]: I0126 19:49:21.995538 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-utilities\") pod \"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:21 crc kubenswrapper[4770]: I0126 19:49:21.995689 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-catalog-content\") pod \"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.097845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-catalog-content\") pod 
\"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.097985 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsj7p\" (UniqueName: \"kubernetes.io/projected/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-kube-api-access-bsj7p\") pod \"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.098139 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-utilities\") pod \"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.098334 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-catalog-content\") pod \"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.098660 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-utilities\") pod \"certified-operators-mtdnp\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.120527 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsj7p\" (UniqueName: \"kubernetes.io/projected/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-kube-api-access-bsj7p\") pod \"certified-operators-mtdnp\" (UID: 
\"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.213170 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:22 crc kubenswrapper[4770]: I0126 19:49:22.775009 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtdnp"] Jan 26 19:49:23 crc kubenswrapper[4770]: I0126 19:49:23.705977 4770 generic.go:334] "Generic (PLEG): container finished" podID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerID="0d353b1980a546c3a7a2c1ba9ad7f74e758ca0f8367ef02aa5e25842c88e657f" exitCode=0 Jan 26 19:49:23 crc kubenswrapper[4770]: I0126 19:49:23.706103 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtdnp" event={"ID":"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71","Type":"ContainerDied","Data":"0d353b1980a546c3a7a2c1ba9ad7f74e758ca0f8367ef02aa5e25842c88e657f"} Jan 26 19:49:23 crc kubenswrapper[4770]: I0126 19:49:23.706269 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtdnp" event={"ID":"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71","Type":"ContainerStarted","Data":"0b28b4ec7005083060af95c7818e151d1ecb4b7ad9dcbaba6b1edec5f19837a8"} Jan 26 19:49:24 crc kubenswrapper[4770]: I0126 19:49:24.723465 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtdnp" event={"ID":"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71","Type":"ContainerStarted","Data":"7c90cbbc68c24d9c7ff9bebce3177dc673547350896e88631da030f78e66cbfc"} Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.076354 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dk5fj"] Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.079500 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.090475 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk5fj"] Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.186566 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-utilities\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.187179 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-catalog-content\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.187379 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9lnz\" (UniqueName: \"kubernetes.io/projected/356f5676-d763-4f18-9caa-66d8e65ad12b-kube-api-access-s9lnz\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.291249 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-catalog-content\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.292152 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s9lnz\" (UniqueName: \"kubernetes.io/projected/356f5676-d763-4f18-9caa-66d8e65ad12b-kube-api-access-s9lnz\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.292526 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-utilities\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.293009 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-catalog-content\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.293304 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-utilities\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.323963 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9lnz\" (UniqueName: \"kubernetes.io/projected/356f5676-d763-4f18-9caa-66d8e65ad12b-kube-api-access-s9lnz\") pod \"redhat-marketplace-dk5fj\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.411238 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:25 crc kubenswrapper[4770]: I0126 19:49:25.977187 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk5fj"] Jan 26 19:49:26 crc kubenswrapper[4770]: I0126 19:49:26.747007 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk5fj" event={"ID":"356f5676-d763-4f18-9caa-66d8e65ad12b","Type":"ContainerStarted","Data":"abc5f747107ae3a3d92e07cd7d36293f8c7ba710df8b79fa9ed723779fe6d645"} Jan 26 19:49:27 crc kubenswrapper[4770]: I0126 19:49:27.762430 4770 generic.go:334] "Generic (PLEG): container finished" podID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerID="7c90cbbc68c24d9c7ff9bebce3177dc673547350896e88631da030f78e66cbfc" exitCode=0 Jan 26 19:49:27 crc kubenswrapper[4770]: I0126 19:49:27.762539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtdnp" event={"ID":"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71","Type":"ContainerDied","Data":"7c90cbbc68c24d9c7ff9bebce3177dc673547350896e88631da030f78e66cbfc"} Jan 26 19:49:27 crc kubenswrapper[4770]: I0126 19:49:27.766923 4770 generic.go:334] "Generic (PLEG): container finished" podID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerID="91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723" exitCode=0 Jan 26 19:49:27 crc kubenswrapper[4770]: I0126 19:49:27.809451 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk5fj" event={"ID":"356f5676-d763-4f18-9caa-66d8e65ad12b","Type":"ContainerDied","Data":"91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723"} Jan 26 19:49:28 crc kubenswrapper[4770]: I0126 19:49:28.776949 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtdnp" 
event={"ID":"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71","Type":"ContainerStarted","Data":"738bf0692251e2fef512d7d8007702dc6c4a91fa3aa24624bcc37a47c4c3bae5"} Jan 26 19:49:28 crc kubenswrapper[4770]: I0126 19:49:28.779841 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk5fj" event={"ID":"356f5676-d763-4f18-9caa-66d8e65ad12b","Type":"ContainerStarted","Data":"55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579"} Jan 26 19:49:28 crc kubenswrapper[4770]: I0126 19:49:28.804642 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mtdnp" podStartSLOduration=3.345110568 podStartE2EDuration="7.804622689s" podCreationTimestamp="2026-01-26 19:49:21 +0000 UTC" firstStartedPulling="2026-01-26 19:49:23.707990996 +0000 UTC m=+4048.272897748" lastFinishedPulling="2026-01-26 19:49:28.167503127 +0000 UTC m=+4052.732409869" observedRunningTime="2026-01-26 19:49:28.795293347 +0000 UTC m=+4053.360200089" watchObservedRunningTime="2026-01-26 19:49:28.804622689 +0000 UTC m=+4053.369529431" Jan 26 19:49:29 crc kubenswrapper[4770]: I0126 19:49:29.210900 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:29 crc kubenswrapper[4770]: I0126 19:49:29.285802 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:29 crc kubenswrapper[4770]: I0126 19:49:29.791455 4770 generic.go:334] "Generic (PLEG): container finished" podID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerID="55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579" exitCode=0 Jan 26 19:49:29 crc kubenswrapper[4770]: I0126 19:49:29.793113 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk5fj" 
event={"ID":"356f5676-d763-4f18-9caa-66d8e65ad12b","Type":"ContainerDied","Data":"55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579"} Jan 26 19:49:31 crc kubenswrapper[4770]: I0126 19:49:31.816449 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk5fj" event={"ID":"356f5676-d763-4f18-9caa-66d8e65ad12b","Type":"ContainerStarted","Data":"3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4"} Jan 26 19:49:31 crc kubenswrapper[4770]: I0126 19:49:31.842473 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dk5fj" podStartSLOduration=4.385397861 podStartE2EDuration="6.842455003s" podCreationTimestamp="2026-01-26 19:49:25 +0000 UTC" firstStartedPulling="2026-01-26 19:49:27.768402667 +0000 UTC m=+4052.333309399" lastFinishedPulling="2026-01-26 19:49:30.225459799 +0000 UTC m=+4054.790366541" observedRunningTime="2026-01-26 19:49:31.840589093 +0000 UTC m=+4056.405495825" watchObservedRunningTime="2026-01-26 19:49:31.842455003 +0000 UTC m=+4056.407361755" Jan 26 19:49:32 crc kubenswrapper[4770]: I0126 19:49:32.061960 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stmc5"] Jan 26 19:49:32 crc kubenswrapper[4770]: I0126 19:49:32.062219 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-stmc5" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="registry-server" containerID="cri-o://205964b7468e37f0a50915ad6e245e2a2ac287b700866629f7cf2f0004f8a9c2" gracePeriod=2 Jan 26 19:49:32 crc kubenswrapper[4770]: I0126 19:49:32.213837 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:32 crc kubenswrapper[4770]: I0126 19:49:32.213886 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:32 crc kubenswrapper[4770]: I0126 19:49:32.264275 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:32 crc kubenswrapper[4770]: I0126 19:49:32.834602 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerID="205964b7468e37f0a50915ad6e245e2a2ac287b700866629f7cf2f0004f8a9c2" exitCode=0 Jan 26 19:49:32 crc kubenswrapper[4770]: I0126 19:49:32.834654 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmc5" event={"ID":"1b053a99-23e1-4fb0-be06-e7ea2369bbad","Type":"ContainerDied","Data":"205964b7468e37f0a50915ad6e245e2a2ac287b700866629f7cf2f0004f8a9c2"} Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.138568 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.270784 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-utilities\") pod \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.270990 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-catalog-content\") pod \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.271103 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zz4n\" (UniqueName: \"kubernetes.io/projected/1b053a99-23e1-4fb0-be06-e7ea2369bbad-kube-api-access-6zz4n\") pod 
\"1b053a99-23e1-4fb0-be06-e7ea2369bbad\" (UID: \"1b053a99-23e1-4fb0-be06-e7ea2369bbad\") " Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.271642 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-utilities" (OuterVolumeSpecName: "utilities") pod "1b053a99-23e1-4fb0-be06-e7ea2369bbad" (UID: "1b053a99-23e1-4fb0-be06-e7ea2369bbad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.374167 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.376218 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b053a99-23e1-4fb0-be06-e7ea2369bbad-kube-api-access-6zz4n" (OuterVolumeSpecName: "kube-api-access-6zz4n") pod "1b053a99-23e1-4fb0-be06-e7ea2369bbad" (UID: "1b053a99-23e1-4fb0-be06-e7ea2369bbad"). InnerVolumeSpecName "kube-api-access-6zz4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.389821 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b053a99-23e1-4fb0-be06-e7ea2369bbad" (UID: "1b053a99-23e1-4fb0-be06-e7ea2369bbad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.476193 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b053a99-23e1-4fb0-be06-e7ea2369bbad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.476578 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zz4n\" (UniqueName: \"kubernetes.io/projected/1b053a99-23e1-4fb0-be06-e7ea2369bbad-kube-api-access-6zz4n\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.855987 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmc5" event={"ID":"1b053a99-23e1-4fb0-be06-e7ea2369bbad","Type":"ContainerDied","Data":"cfa965235e7d7a1fb0457d41ec55161b6880f8d056789a7bf15e8fdcd1fa17fb"} Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.856075 4770 scope.go:117] "RemoveContainer" containerID="205964b7468e37f0a50915ad6e245e2a2ac287b700866629f7cf2f0004f8a9c2" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.856112 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stmc5" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.889022 4770 scope.go:117] "RemoveContainer" containerID="154cbd4d6326b41123bde5b8610c9c1ed8f2b0df32b70e789b8b86178e4b24ca" Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.905356 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stmc5"] Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.918755 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-stmc5"] Jan 26 19:49:33 crc kubenswrapper[4770]: I0126 19:49:33.930654 4770 scope.go:117] "RemoveContainer" containerID="3c987aebcdb91fd032259daf7160a222960ae8ea98e8cf565d549cab47b979a8" Jan 26 19:49:35 crc kubenswrapper[4770]: I0126 19:49:35.412182 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:35 crc kubenswrapper[4770]: I0126 19:49:35.412553 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:35 crc kubenswrapper[4770]: I0126 19:49:35.487134 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:35 crc kubenswrapper[4770]: I0126 19:49:35.781583 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" path="/var/lib/kubelet/pods/1b053a99-23e1-4fb0-be06-e7ea2369bbad/volumes" Jan 26 19:49:42 crc kubenswrapper[4770]: I0126 19:49:42.297224 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:45 crc kubenswrapper[4770]: I0126 19:49:45.483226 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:45 crc 
kubenswrapper[4770]: I0126 19:49:45.864747 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtdnp"] Jan 26 19:49:45 crc kubenswrapper[4770]: I0126 19:49:45.865094 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mtdnp" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="registry-server" containerID="cri-o://738bf0692251e2fef512d7d8007702dc6c4a91fa3aa24624bcc37a47c4c3bae5" gracePeriod=2 Jan 26 19:49:46 crc kubenswrapper[4770]: I0126 19:49:46.018304 4770 generic.go:334] "Generic (PLEG): container finished" podID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerID="738bf0692251e2fef512d7d8007702dc6c4a91fa3aa24624bcc37a47c4c3bae5" exitCode=0 Jan 26 19:49:46 crc kubenswrapper[4770]: I0126 19:49:46.018365 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtdnp" event={"ID":"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71","Type":"ContainerDied","Data":"738bf0692251e2fef512d7d8007702dc6c4a91fa3aa24624bcc37a47c4c3bae5"} Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.029008 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtdnp" event={"ID":"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71","Type":"ContainerDied","Data":"0b28b4ec7005083060af95c7818e151d1ecb4b7ad9dcbaba6b1edec5f19837a8"} Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.029643 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b28b4ec7005083060af95c7818e151d1ecb4b7ad9dcbaba6b1edec5f19837a8" Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.034530 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.150858 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-utilities\") pod \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.150945 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsj7p\" (UniqueName: \"kubernetes.io/projected/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-kube-api-access-bsj7p\") pod \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.150994 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-catalog-content\") pod \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\" (UID: \"7ad0e7b4-81e6-4427-8f8c-07e6a3755f71\") " Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.152193 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-utilities" (OuterVolumeSpecName: "utilities") pod "7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" (UID: "7ad0e7b4-81e6-4427-8f8c-07e6a3755f71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.158841 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-kube-api-access-bsj7p" (OuterVolumeSpecName: "kube-api-access-bsj7p") pod "7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" (UID: "7ad0e7b4-81e6-4427-8f8c-07e6a3755f71"). InnerVolumeSpecName "kube-api-access-bsj7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.194317 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" (UID: "7ad0e7b4-81e6-4427-8f8c-07e6a3755f71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.254074 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.254117 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsj7p\" (UniqueName: \"kubernetes.io/projected/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-kube-api-access-bsj7p\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:47 crc kubenswrapper[4770]: I0126 19:49:47.254128 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:48 crc kubenswrapper[4770]: I0126 19:49:48.036501 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mtdnp" Jan 26 19:49:48 crc kubenswrapper[4770]: I0126 19:49:48.061177 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtdnp"] Jan 26 19:49:48 crc kubenswrapper[4770]: I0126 19:49:48.070058 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mtdnp"] Jan 26 19:49:49 crc kubenswrapper[4770]: I0126 19:49:49.787194 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" path="/var/lib/kubelet/pods/7ad0e7b4-81e6-4427-8f8c-07e6a3755f71/volumes" Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.258488 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk5fj"] Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.258804 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dk5fj" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="registry-server" containerID="cri-o://3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4" gracePeriod=2 Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.792622 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.938875 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-utilities\") pod \"356f5676-d763-4f18-9caa-66d8e65ad12b\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.939226 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9lnz\" (UniqueName: \"kubernetes.io/projected/356f5676-d763-4f18-9caa-66d8e65ad12b-kube-api-access-s9lnz\") pod \"356f5676-d763-4f18-9caa-66d8e65ad12b\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.939299 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-catalog-content\") pod \"356f5676-d763-4f18-9caa-66d8e65ad12b\" (UID: \"356f5676-d763-4f18-9caa-66d8e65ad12b\") " Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.939988 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-utilities" (OuterVolumeSpecName: "utilities") pod "356f5676-d763-4f18-9caa-66d8e65ad12b" (UID: "356f5676-d763-4f18-9caa-66d8e65ad12b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.940842 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.947986 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356f5676-d763-4f18-9caa-66d8e65ad12b-kube-api-access-s9lnz" (OuterVolumeSpecName: "kube-api-access-s9lnz") pod "356f5676-d763-4f18-9caa-66d8e65ad12b" (UID: "356f5676-d763-4f18-9caa-66d8e65ad12b"). InnerVolumeSpecName "kube-api-access-s9lnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:49:50 crc kubenswrapper[4770]: I0126 19:49:50.964111 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "356f5676-d763-4f18-9caa-66d8e65ad12b" (UID: "356f5676-d763-4f18-9caa-66d8e65ad12b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.042378 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9lnz\" (UniqueName: \"kubernetes.io/projected/356f5676-d763-4f18-9caa-66d8e65ad12b-kube-api-access-s9lnz\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.042412 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/356f5676-d763-4f18-9caa-66d8e65ad12b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.063652 4770 generic.go:334] "Generic (PLEG): container finished" podID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerID="3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4" exitCode=0 Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.063710 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk5fj" event={"ID":"356f5676-d763-4f18-9caa-66d8e65ad12b","Type":"ContainerDied","Data":"3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4"} Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.063738 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk5fj" event={"ID":"356f5676-d763-4f18-9caa-66d8e65ad12b","Type":"ContainerDied","Data":"abc5f747107ae3a3d92e07cd7d36293f8c7ba710df8b79fa9ed723779fe6d645"} Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.063755 4770 scope.go:117] "RemoveContainer" containerID="3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.063893 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk5fj" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.114856 4770 scope.go:117] "RemoveContainer" containerID="55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.120631 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk5fj"] Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.129553 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk5fj"] Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.136646 4770 scope.go:117] "RemoveContainer" containerID="91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.199530 4770 scope.go:117] "RemoveContainer" containerID="3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4" Jan 26 19:49:51 crc kubenswrapper[4770]: E0126 19:49:51.200002 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4\": container with ID starting with 3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4 not found: ID does not exist" containerID="3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.200035 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4"} err="failed to get container status \"3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4\": rpc error: code = NotFound desc = could not find container \"3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4\": container with ID starting with 3c9ce2c65b871a62c6f87881e03ca3598e209e664549d134b1ee6a859740ecd4 not found: 
ID does not exist" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.200057 4770 scope.go:117] "RemoveContainer" containerID="55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579" Jan 26 19:49:51 crc kubenswrapper[4770]: E0126 19:49:51.200411 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579\": container with ID starting with 55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579 not found: ID does not exist" containerID="55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.200431 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579"} err="failed to get container status \"55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579\": rpc error: code = NotFound desc = could not find container \"55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579\": container with ID starting with 55e7f831b14e2e29c53ba901463b521b87c867e6dd7ef63a01741e1be5cf3579 not found: ID does not exist" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.200444 4770 scope.go:117] "RemoveContainer" containerID="91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723" Jan 26 19:49:51 crc kubenswrapper[4770]: E0126 19:49:51.200800 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723\": container with ID starting with 91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723 not found: ID does not exist" containerID="91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.201014 4770 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723"} err="failed to get container status \"91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723\": rpc error: code = NotFound desc = could not find container \"91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723\": container with ID starting with 91f23d192f7cb290138d3508a9637cc4205a870978cde7a95c39d189aae2c723 not found: ID does not exist" Jan 26 19:49:51 crc kubenswrapper[4770]: I0126 19:49:51.781920 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" path="/var/lib/kubelet/pods/356f5676-d763-4f18-9caa-66d8e65ad12b/volumes" Jan 26 19:50:00 crc kubenswrapper[4770]: I0126 19:50:00.330324 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:50:00 crc kubenswrapper[4770]: I0126 19:50:00.330901 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:50:30 crc kubenswrapper[4770]: I0126 19:50:30.331187 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:50:30 crc kubenswrapper[4770]: I0126 19:50:30.331827 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.331026 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.332731 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.332823 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.333991 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5dc5c986d0afa24399d6378ca954d1710fd86f54578b1a65a78c6395457bb316"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.334085 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" 
containerID="cri-o://5dc5c986d0afa24399d6378ca954d1710fd86f54578b1a65a78c6395457bb316" gracePeriod=600 Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.903587 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="5dc5c986d0afa24399d6378ca954d1710fd86f54578b1a65a78c6395457bb316" exitCode=0 Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.903746 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"5dc5c986d0afa24399d6378ca954d1710fd86f54578b1a65a78c6395457bb316"} Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.904297 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a"} Jan 26 19:51:00 crc kubenswrapper[4770]: I0126 19:51:00.904327 4770 scope.go:117] "RemoveContainer" containerID="28bbf4041990f36a4e8ff0ddca9ed5868ad2a259c97f5a59263cadfed10d7105" Jan 26 19:51:43 crc kubenswrapper[4770]: I0126 19:51:43.603613 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5c5fff9c7-vsc8j" podUID="061a1ade-3e2c-4fa3-af1d-79119e42b777" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 19:53:00 crc kubenswrapper[4770]: I0126 19:53:00.331155 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:53:00 crc kubenswrapper[4770]: I0126 19:53:00.331858 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:53:30 crc kubenswrapper[4770]: I0126 19:53:30.330741 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:53:30 crc kubenswrapper[4770]: I0126 19:53:30.331295 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.330968 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.331655 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.331719 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 19:54:00 crc 
kubenswrapper[4770]: I0126 19:54:00.332282 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.332349 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" gracePeriod=600 Jan 26 19:54:00 crc kubenswrapper[4770]: E0126 19:54:00.478784 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.937990 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" exitCode=0 Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.938064 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a"} Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.938112 4770 scope.go:117] "RemoveContainer" 
containerID="5dc5c986d0afa24399d6378ca954d1710fd86f54578b1a65a78c6395457bb316" Jan 26 19:54:00 crc kubenswrapper[4770]: I0126 19:54:00.938976 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:54:00 crc kubenswrapper[4770]: E0126 19:54:00.939364 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:54:14 crc kubenswrapper[4770]: I0126 19:54:14.767910 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:54:14 crc kubenswrapper[4770]: E0126 19:54:14.769037 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:54:27 crc kubenswrapper[4770]: I0126 19:54:27.768418 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:54:27 crc kubenswrapper[4770]: E0126 19:54:27.769499 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:54:41 crc kubenswrapper[4770]: I0126 19:54:41.767303 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:54:41 crc kubenswrapper[4770]: E0126 19:54:41.768379 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:54:55 crc kubenswrapper[4770]: I0126 19:54:55.785077 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:54:55 crc kubenswrapper[4770]: E0126 19:54:55.785893 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:55:10 crc kubenswrapper[4770]: I0126 19:55:10.767580 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:55:10 crc kubenswrapper[4770]: E0126 19:55:10.768405 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:55:21 crc kubenswrapper[4770]: I0126 19:55:21.767787 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:55:21 crc kubenswrapper[4770]: E0126 19:55:21.768728 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.891491 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5r25b"] Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892199 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892441 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892455 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="extract-utilities" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892462 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="extract-utilities" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892478 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="extract-utilities" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892484 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="extract-utilities" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892496 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="extract-content" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892502 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="extract-content" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892510 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892515 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892529 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="extract-content" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892536 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="extract-content" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892559 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="extract-utilities" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892566 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="extract-utilities" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892573 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892579 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: E0126 19:55:24.892588 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="extract-content" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892593 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="extract-content" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892789 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad0e7b4-81e6-4427-8f8c-07e6a3755f71" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892805 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="356f5676-d763-4f18-9caa-66d8e65ad12b" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.892818 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b053a99-23e1-4fb0-be06-e7ea2369bbad" containerName="registry-server" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.894220 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.913941 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5r25b"] Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.994176 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-catalog-content\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.994359 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkf4n\" (UniqueName: \"kubernetes.io/projected/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-kube-api-access-kkf4n\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:24 crc kubenswrapper[4770]: I0126 19:55:24.994397 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-utilities\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.096550 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-catalog-content\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.096736 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kkf4n\" (UniqueName: \"kubernetes.io/projected/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-kube-api-access-kkf4n\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.096783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-utilities\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.097069 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-catalog-content\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.097275 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-utilities\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.124156 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkf4n\" (UniqueName: \"kubernetes.io/projected/50a3775b-da9d-4e62-9695-6e7e0c6ac3cc-kube-api-access-kkf4n\") pod \"community-operators-5r25b\" (UID: \"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc\") " pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.222108 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.745912 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5r25b"] Jan 26 19:55:25 crc kubenswrapper[4770]: I0126 19:55:25.996346 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5r25b" event={"ID":"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc","Type":"ContainerStarted","Data":"bd83ee2d3656f03af8fc82f61e03bad0a4d86ef7c738e2fa82bfd577a0a40db7"} Jan 26 19:55:27 crc kubenswrapper[4770]: I0126 19:55:27.014421 4770 generic.go:334] "Generic (PLEG): container finished" podID="50a3775b-da9d-4e62-9695-6e7e0c6ac3cc" containerID="33dbcc48fb1fec185264e67e3c53f15ec04aebc3c12a0fa506366758260fbb2e" exitCode=0 Jan 26 19:55:27 crc kubenswrapper[4770]: I0126 19:55:27.014531 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5r25b" event={"ID":"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc","Type":"ContainerDied","Data":"33dbcc48fb1fec185264e67e3c53f15ec04aebc3c12a0fa506366758260fbb2e"} Jan 26 19:55:27 crc kubenswrapper[4770]: I0126 19:55:27.018245 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 19:55:32 crc kubenswrapper[4770]: I0126 19:55:32.085760 4770 generic.go:334] "Generic (PLEG): container finished" podID="50a3775b-da9d-4e62-9695-6e7e0c6ac3cc" containerID="0b6b51138e4462f56e2f7c3f5d258b28262d6e133dfd552a3199ba05dc3f9ffe" exitCode=0 Jan 26 19:55:32 crc kubenswrapper[4770]: I0126 19:55:32.085882 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5r25b" event={"ID":"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc","Type":"ContainerDied","Data":"0b6b51138e4462f56e2f7c3f5d258b28262d6e133dfd552a3199ba05dc3f9ffe"} Jan 26 19:55:33 crc kubenswrapper[4770]: I0126 19:55:33.101210 4770 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-5r25b" event={"ID":"50a3775b-da9d-4e62-9695-6e7e0c6ac3cc","Type":"ContainerStarted","Data":"cf5617a86f65bd7e6e1f3edc5b0718309bce4caeb0e5d2247f7af1ae4b74ee7f"} Jan 26 19:55:33 crc kubenswrapper[4770]: I0126 19:55:33.766821 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:55:33 crc kubenswrapper[4770]: E0126 19:55:33.767299 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:55:34 crc kubenswrapper[4770]: I0126 19:55:34.162658 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5r25b" podStartSLOduration=4.538218089 podStartE2EDuration="10.162628422s" podCreationTimestamp="2026-01-26 19:55:24 +0000 UTC" firstStartedPulling="2026-01-26 19:55:27.017847033 +0000 UTC m=+4411.582753805" lastFinishedPulling="2026-01-26 19:55:32.642257366 +0000 UTC m=+4417.207164138" observedRunningTime="2026-01-26 19:55:34.140516086 +0000 UTC m=+4418.705422878" watchObservedRunningTime="2026-01-26 19:55:34.162628422 +0000 UTC m=+4418.727535194" Jan 26 19:55:35 crc kubenswrapper[4770]: I0126 19:55:35.222266 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:35 crc kubenswrapper[4770]: I0126 19:55:35.222682 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:35 crc kubenswrapper[4770]: I0126 19:55:35.300742 4770 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:44 crc kubenswrapper[4770]: I0126 19:55:44.767308 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:55:44 crc kubenswrapper[4770]: E0126 19:55:44.768125 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:55:45 crc kubenswrapper[4770]: I0126 19:55:45.318938 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5r25b" Jan 26 19:55:45 crc kubenswrapper[4770]: I0126 19:55:45.432219 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5r25b"] Jan 26 19:55:45 crc kubenswrapper[4770]: I0126 19:55:45.495013 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5mzq8"] Jan 26 19:55:45 crc kubenswrapper[4770]: I0126 19:55:45.495505 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5mzq8" podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="registry-server" containerID="cri-o://5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f" gracePeriod=2 Jan 26 19:55:45 crc kubenswrapper[4770]: I0126 19:55:45.945319 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.059369 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-catalog-content\") pod \"5265896d-8227-4910-a158-d447ed4139f4\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.059438 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chpsh\" (UniqueName: \"kubernetes.io/projected/5265896d-8227-4910-a158-d447ed4139f4-kube-api-access-chpsh\") pod \"5265896d-8227-4910-a158-d447ed4139f4\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.059510 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-utilities\") pod \"5265896d-8227-4910-a158-d447ed4139f4\" (UID: \"5265896d-8227-4910-a158-d447ed4139f4\") " Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.061375 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-utilities" (OuterVolumeSpecName: "utilities") pod "5265896d-8227-4910-a158-d447ed4139f4" (UID: "5265896d-8227-4910-a158-d447ed4139f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.078932 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5265896d-8227-4910-a158-d447ed4139f4-kube-api-access-chpsh" (OuterVolumeSpecName: "kube-api-access-chpsh") pod "5265896d-8227-4910-a158-d447ed4139f4" (UID: "5265896d-8227-4910-a158-d447ed4139f4"). InnerVolumeSpecName "kube-api-access-chpsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.123611 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5265896d-8227-4910-a158-d447ed4139f4" (UID: "5265896d-8227-4910-a158-d447ed4139f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.162189 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.162233 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chpsh\" (UniqueName: \"kubernetes.io/projected/5265896d-8227-4910-a158-d447ed4139f4-kube-api-access-chpsh\") on node \"crc\" DevicePath \"\"" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.162246 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5265896d-8227-4910-a158-d447ed4139f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.242330 4770 generic.go:334] "Generic (PLEG): container finished" podID="5265896d-8227-4910-a158-d447ed4139f4" containerID="5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f" exitCode=0 Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.242396 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mzq8" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.242385 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mzq8" event={"ID":"5265896d-8227-4910-a158-d447ed4139f4","Type":"ContainerDied","Data":"5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f"} Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.242560 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mzq8" event={"ID":"5265896d-8227-4910-a158-d447ed4139f4","Type":"ContainerDied","Data":"d72469ddc55e7a5535edbaf00e1311c6277ff6b790df80f61fcf8c644969be39"} Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.242588 4770 scope.go:117] "RemoveContainer" containerID="5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.271751 4770 scope.go:117] "RemoveContainer" containerID="863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.273176 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5mzq8"] Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.282116 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5mzq8"] Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.304362 4770 scope.go:117] "RemoveContainer" containerID="edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.342436 4770 scope.go:117] "RemoveContainer" containerID="5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f" Jan 26 19:55:46 crc kubenswrapper[4770]: E0126 19:55:46.342848 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f\": container with ID starting with 5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f not found: ID does not exist" containerID="5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.342880 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f"} err="failed to get container status \"5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f\": rpc error: code = NotFound desc = could not find container \"5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f\": container with ID starting with 5f5ae3f3da43a2d5a18a395f5076875884036154b15a179a4a7dde4b5086e14f not found: ID does not exist" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.342902 4770 scope.go:117] "RemoveContainer" containerID="863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3" Jan 26 19:55:46 crc kubenswrapper[4770]: E0126 19:55:46.343229 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3\": container with ID starting with 863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3 not found: ID does not exist" containerID="863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.343278 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3"} err="failed to get container status \"863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3\": rpc error: code = NotFound desc = could not find container \"863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3\": container with ID 
starting with 863ad9794d64afa5075432c9b11a07b9dc9446a7851fb0590a1906c118de29d3 not found: ID does not exist" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.343306 4770 scope.go:117] "RemoveContainer" containerID="edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712" Jan 26 19:55:46 crc kubenswrapper[4770]: E0126 19:55:46.343538 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712\": container with ID starting with edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712 not found: ID does not exist" containerID="edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712" Jan 26 19:55:46 crc kubenswrapper[4770]: I0126 19:55:46.343568 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712"} err="failed to get container status \"edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712\": rpc error: code = NotFound desc = could not find container \"edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712\": container with ID starting with edf29914cd0632cb13055a22e05a558e6d6b1a41bb004367794a7480bf45a712 not found: ID does not exist" Jan 26 19:55:47 crc kubenswrapper[4770]: I0126 19:55:47.787958 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5265896d-8227-4910-a158-d447ed4139f4" path="/var/lib/kubelet/pods/5265896d-8227-4910-a158-d447ed4139f4/volumes" Jan 26 19:55:59 crc kubenswrapper[4770]: I0126 19:55:59.780913 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:55:59 crc kubenswrapper[4770]: E0126 19:55:59.784428 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:56:11 crc kubenswrapper[4770]: I0126 19:56:11.767446 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:56:11 crc kubenswrapper[4770]: E0126 19:56:11.768689 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:56:21 crc kubenswrapper[4770]: I0126 19:56:21.481873 4770 scope.go:117] "RemoveContainer" containerID="738bf0692251e2fef512d7d8007702dc6c4a91fa3aa24624bcc37a47c4c3bae5" Jan 26 19:56:21 crc kubenswrapper[4770]: I0126 19:56:21.510470 4770 scope.go:117] "RemoveContainer" containerID="0d353b1980a546c3a7a2c1ba9ad7f74e758ca0f8367ef02aa5e25842c88e657f" Jan 26 19:56:21 crc kubenswrapper[4770]: I0126 19:56:21.538761 4770 scope.go:117] "RemoveContainer" containerID="7c90cbbc68c24d9c7ff9bebce3177dc673547350896e88631da030f78e66cbfc" Jan 26 19:56:26 crc kubenswrapper[4770]: I0126 19:56:26.768060 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:56:26 crc kubenswrapper[4770]: E0126 19:56:26.769639 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:56:37 crc kubenswrapper[4770]: I0126 19:56:37.767599 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:56:37 crc kubenswrapper[4770]: E0126 19:56:37.768940 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:56:48 crc kubenswrapper[4770]: I0126 19:56:48.767407 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:56:48 crc kubenswrapper[4770]: E0126 19:56:48.768172 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:56:59 crc kubenswrapper[4770]: I0126 19:56:59.767761 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:56:59 crc kubenswrapper[4770]: E0126 19:56:59.768841 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:57:13 crc kubenswrapper[4770]: I0126 19:57:13.767704 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:57:13 crc kubenswrapper[4770]: E0126 19:57:13.768825 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:57:26 crc kubenswrapper[4770]: I0126 19:57:26.767098 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:57:26 crc kubenswrapper[4770]: E0126 19:57:26.768104 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:57:37 crc kubenswrapper[4770]: I0126 19:57:37.767976 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:57:37 crc kubenswrapper[4770]: E0126 19:57:37.768710 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:57:48 crc kubenswrapper[4770]: I0126 19:57:48.767436 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:57:48 crc kubenswrapper[4770]: E0126 19:57:48.768652 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:57:59 crc kubenswrapper[4770]: I0126 19:57:59.768094 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:57:59 crc kubenswrapper[4770]: E0126 19:57:59.769162 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:58:09 crc kubenswrapper[4770]: E0126 19:58:09.395672 4770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:43702->38.102.83.51:41531: write tcp 38.102.83.51:43702->38.102.83.51:41531: write: broken pipe Jan 26 19:58:12 crc kubenswrapper[4770]: I0126 19:58:12.769833 4770 scope.go:117] "RemoveContainer" 
containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:58:12 crc kubenswrapper[4770]: E0126 19:58:12.770828 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:58:25 crc kubenswrapper[4770]: I0126 19:58:25.780620 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:58:25 crc kubenswrapper[4770]: E0126 19:58:25.781989 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:58:40 crc kubenswrapper[4770]: I0126 19:58:40.768424 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:58:40 crc kubenswrapper[4770]: E0126 19:58:40.770061 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:58:52 crc kubenswrapper[4770]: I0126 19:58:52.768089 4770 scope.go:117] 
"RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:58:52 crc kubenswrapper[4770]: E0126 19:58:52.771271 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 19:59:03 crc kubenswrapper[4770]: I0126 19:59:03.767435 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 19:59:04 crc kubenswrapper[4770]: I0126 19:59:04.645818 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"0fe3e29fd61144d4631a8c82432d61e0186e58f66a0e6bac3819bac70fc2507e"} Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.199012 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nfbc7"] Jan 26 19:59:37 crc kubenswrapper[4770]: E0126 19:59:37.200025 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="extract-content" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.200043 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="extract-content" Jan 26 19:59:37 crc kubenswrapper[4770]: E0126 19:59:37.200065 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="extract-utilities" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.200073 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="extract-utilities" Jan 26 19:59:37 crc kubenswrapper[4770]: E0126 19:59:37.200095 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="registry-server" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.200103 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="registry-server" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.200436 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5265896d-8227-4910-a158-d447ed4139f4" containerName="registry-server" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.202509 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.227462 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nfbc7"] Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.270128 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jvd\" (UniqueName: \"kubernetes.io/projected/9e39302b-f5d5-4433-872e-9633053b5328-kube-api-access-55jvd\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.270302 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-utilities\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.270372 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-catalog-content\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.372886 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-catalog-content\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.373247 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55jvd\" (UniqueName: \"kubernetes.io/projected/9e39302b-f5d5-4433-872e-9633053b5328-kube-api-access-55jvd\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.373366 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-catalog-content\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.373402 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-utilities\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.373612 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-utilities\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.474883 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55jvd\" (UniqueName: \"kubernetes.io/projected/9e39302b-f5d5-4433-872e-9633053b5328-kube-api-access-55jvd\") pod \"certified-operators-nfbc7\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.537968 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:37 crc kubenswrapper[4770]: I0126 19:59:37.850811 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nfbc7"] Jan 26 19:59:37 crc kubenswrapper[4770]: W0126 19:59:37.876174 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e39302b_f5d5_4433_872e_9633053b5328.slice/crio-a4d3bfba18f610e87947718d83c76ea1cbbcedd180430b231b8d4ca92d69f0ef WatchSource:0}: Error finding container a4d3bfba18f610e87947718d83c76ea1cbbcedd180430b231b8d4ca92d69f0ef: Status 404 returned error can't find the container with id a4d3bfba18f610e87947718d83c76ea1cbbcedd180430b231b8d4ca92d69f0ef Jan 26 19:59:38 crc kubenswrapper[4770]: I0126 19:59:38.049945 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfbc7" event={"ID":"9e39302b-f5d5-4433-872e-9633053b5328","Type":"ContainerStarted","Data":"a4d3bfba18f610e87947718d83c76ea1cbbcedd180430b231b8d4ca92d69f0ef"} Jan 26 19:59:39 crc kubenswrapper[4770]: I0126 19:59:39.065748 4770 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfbc7" event={"ID":"9e39302b-f5d5-4433-872e-9633053b5328","Type":"ContainerDied","Data":"b67f0933c1aec241b1077a01e1bc25c67da93aa7f07c39dd7f363a63d928553e"} Jan 26 19:59:39 crc kubenswrapper[4770]: I0126 19:59:39.065679 4770 generic.go:334] "Generic (PLEG): container finished" podID="9e39302b-f5d5-4433-872e-9633053b5328" containerID="b67f0933c1aec241b1077a01e1bc25c67da93aa7f07c39dd7f363a63d928553e" exitCode=0 Jan 26 19:59:40 crc kubenswrapper[4770]: I0126 19:59:40.080680 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfbc7" event={"ID":"9e39302b-f5d5-4433-872e-9633053b5328","Type":"ContainerStarted","Data":"955c66a8ac35c99558992384adf55b32bfd27246f0ce65563a75171aef5c1fea"} Jan 26 19:59:41 crc kubenswrapper[4770]: I0126 19:59:41.095519 4770 generic.go:334] "Generic (PLEG): container finished" podID="9e39302b-f5d5-4433-872e-9633053b5328" containerID="955c66a8ac35c99558992384adf55b32bfd27246f0ce65563a75171aef5c1fea" exitCode=0 Jan 26 19:59:41 crc kubenswrapper[4770]: I0126 19:59:41.095606 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfbc7" event={"ID":"9e39302b-f5d5-4433-872e-9633053b5328","Type":"ContainerDied","Data":"955c66a8ac35c99558992384adf55b32bfd27246f0ce65563a75171aef5c1fea"} Jan 26 19:59:42 crc kubenswrapper[4770]: I0126 19:59:42.120907 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfbc7" event={"ID":"9e39302b-f5d5-4433-872e-9633053b5328","Type":"ContainerStarted","Data":"ba930f393489c8fcb5708030b7a2f8ebb5a811e88f632ac7355b0c0ab092121c"} Jan 26 19:59:42 crc kubenswrapper[4770]: I0126 19:59:42.160152 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nfbc7" podStartSLOduration=2.704368442 podStartE2EDuration="5.160127549s" 
podCreationTimestamp="2026-01-26 19:59:37 +0000 UTC" firstStartedPulling="2026-01-26 19:59:39.06877287 +0000 UTC m=+4663.633679642" lastFinishedPulling="2026-01-26 19:59:41.524532007 +0000 UTC m=+4666.089438749" observedRunningTime="2026-01-26 19:59:42.146370048 +0000 UTC m=+4666.711276790" watchObservedRunningTime="2026-01-26 19:59:42.160127549 +0000 UTC m=+4666.725034291" Jan 26 19:59:47 crc kubenswrapper[4770]: I0126 19:59:47.539278 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:47 crc kubenswrapper[4770]: I0126 19:59:47.539898 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:47 crc kubenswrapper[4770]: I0126 19:59:47.594197 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:48 crc kubenswrapper[4770]: I0126 19:59:48.309571 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:48 crc kubenswrapper[4770]: I0126 19:59:48.388926 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nfbc7"] Jan 26 19:59:50 crc kubenswrapper[4770]: I0126 19:59:50.222588 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nfbc7" podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="registry-server" containerID="cri-o://ba930f393489c8fcb5708030b7a2f8ebb5a811e88f632ac7355b0c0ab092121c" gracePeriod=2 Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.236527 4770 generic.go:334] "Generic (PLEG): container finished" podID="9e39302b-f5d5-4433-872e-9633053b5328" containerID="ba930f393489c8fcb5708030b7a2f8ebb5a811e88f632ac7355b0c0ab092121c" exitCode=0 Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 
19:59:51.236657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfbc7" event={"ID":"9e39302b-f5d5-4433-872e-9633053b5328","Type":"ContainerDied","Data":"ba930f393489c8fcb5708030b7a2f8ebb5a811e88f632ac7355b0c0ab092121c"} Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.461667 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.641336 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-catalog-content\") pod \"9e39302b-f5d5-4433-872e-9633053b5328\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.641506 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55jvd\" (UniqueName: \"kubernetes.io/projected/9e39302b-f5d5-4433-872e-9633053b5328-kube-api-access-55jvd\") pod \"9e39302b-f5d5-4433-872e-9633053b5328\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.641632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-utilities\") pod \"9e39302b-f5d5-4433-872e-9633053b5328\" (UID: \"9e39302b-f5d5-4433-872e-9633053b5328\") " Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.644145 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-utilities" (OuterVolumeSpecName: "utilities") pod "9e39302b-f5d5-4433-872e-9633053b5328" (UID: "9e39302b-f5d5-4433-872e-9633053b5328"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.651903 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e39302b-f5d5-4433-872e-9633053b5328-kube-api-access-55jvd" (OuterVolumeSpecName: "kube-api-access-55jvd") pod "9e39302b-f5d5-4433-872e-9633053b5328" (UID: "9e39302b-f5d5-4433-872e-9633053b5328"). InnerVolumeSpecName "kube-api-access-55jvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.715849 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e39302b-f5d5-4433-872e-9633053b5328" (UID: "9e39302b-f5d5-4433-872e-9633053b5328"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.745021 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55jvd\" (UniqueName: \"kubernetes.io/projected/9e39302b-f5d5-4433-872e-9633053b5328-kube-api-access-55jvd\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.745066 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:51 crc kubenswrapper[4770]: I0126 19:59:51.745078 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e39302b-f5d5-4433-872e-9633053b5328-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 19:59:52 crc kubenswrapper[4770]: I0126 19:59:52.252115 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nfbc7" 
event={"ID":"9e39302b-f5d5-4433-872e-9633053b5328","Type":"ContainerDied","Data":"a4d3bfba18f610e87947718d83c76ea1cbbcedd180430b231b8d4ca92d69f0ef"} Jan 26 19:59:52 crc kubenswrapper[4770]: I0126 19:59:52.252613 4770 scope.go:117] "RemoveContainer" containerID="ba930f393489c8fcb5708030b7a2f8ebb5a811e88f632ac7355b0c0ab092121c" Jan 26 19:59:52 crc kubenswrapper[4770]: I0126 19:59:52.252225 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nfbc7" Jan 26 19:59:52 crc kubenswrapper[4770]: I0126 19:59:52.295596 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nfbc7"] Jan 26 19:59:52 crc kubenswrapper[4770]: I0126 19:59:52.297198 4770 scope.go:117] "RemoveContainer" containerID="955c66a8ac35c99558992384adf55b32bfd27246f0ce65563a75171aef5c1fea" Jan 26 19:59:52 crc kubenswrapper[4770]: I0126 19:59:52.315515 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nfbc7"] Jan 26 19:59:52 crc kubenswrapper[4770]: I0126 19:59:52.329856 4770 scope.go:117] "RemoveContainer" containerID="b67f0933c1aec241b1077a01e1bc25c67da93aa7f07c39dd7f363a63d928553e" Jan 26 19:59:53 crc kubenswrapper[4770]: I0126 19:59:53.779270 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e39302b-f5d5-4433-872e-9633053b5328" path="/var/lib/kubelet/pods/9e39302b-f5d5-4433-872e-9633053b5328/volumes" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.196580 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw"] Jan 26 20:00:00 crc kubenswrapper[4770]: E0126 20:00:00.197565 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.197580 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4770]: E0126 20:00:00.197608 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="extract-content" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.197616 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="extract-content" Jan 26 20:00:00 crc kubenswrapper[4770]: E0126 20:00:00.197647 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="extract-utilities" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.197655 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="extract-utilities" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.197928 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e39302b-f5d5-4433-872e-9633053b5328" containerName="registry-server" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.198820 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.201120 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.201889 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.209036 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw"] Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.358378 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lc5\" (UniqueName: \"kubernetes.io/projected/2693b80a-67e1-426f-b8d2-4ed53e6247fe-kube-api-access-k9lc5\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.358575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2693b80a-67e1-426f-b8d2-4ed53e6247fe-secret-volume\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.358637 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2693b80a-67e1-426f-b8d2-4ed53e6247fe-config-volume\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.460886 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2693b80a-67e1-426f-b8d2-4ed53e6247fe-secret-volume\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.460945 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2693b80a-67e1-426f-b8d2-4ed53e6247fe-config-volume\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.461044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lc5\" (UniqueName: \"kubernetes.io/projected/2693b80a-67e1-426f-b8d2-4ed53e6247fe-kube-api-access-k9lc5\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.462585 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2693b80a-67e1-426f-b8d2-4ed53e6247fe-config-volume\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.490166 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2693b80a-67e1-426f-b8d2-4ed53e6247fe-secret-volume\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.507125 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lc5\" (UniqueName: \"kubernetes.io/projected/2693b80a-67e1-426f-b8d2-4ed53e6247fe-kube-api-access-k9lc5\") pod \"collect-profiles-29490960-7rcjw\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:00 crc kubenswrapper[4770]: I0126 20:00:00.523900 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.028288 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw"] Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.358354 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" event={"ID":"2693b80a-67e1-426f-b8d2-4ed53e6247fe","Type":"ContainerStarted","Data":"d6a0495f4db92e618fa853476c23e73603fa7d1bd62a8e974f3cbb68a2737d9e"} Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.359741 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" event={"ID":"2693b80a-67e1-426f-b8d2-4ed53e6247fe","Type":"ContainerStarted","Data":"453aaf9acff8ab22db3d7c27cde0eff74c55ee87ed5729ac30fd1e6940bd1820"} Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.379560 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" 
podStartSLOduration=1.379545128 podStartE2EDuration="1.379545128s" podCreationTimestamp="2026-01-26 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:00:01.376187478 +0000 UTC m=+4685.941094210" watchObservedRunningTime="2026-01-26 20:00:01.379545128 +0000 UTC m=+4685.944451850" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.498155 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fg8q5"] Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.502672 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.552561 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fg8q5"] Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.686013 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-utilities\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.686398 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-catalog-content\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.686465 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwlqf\" (UniqueName: 
\"kubernetes.io/projected/bf224f56-f5f4-4165-a68b-2c7c7af2a054-kube-api-access-kwlqf\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.788180 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-catalog-content\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.788221 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwlqf\" (UniqueName: \"kubernetes.io/projected/bf224f56-f5f4-4165-a68b-2c7c7af2a054-kube-api-access-kwlqf\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.788312 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-utilities\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.788768 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-catalog-content\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.788773 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-utilities\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.809807 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwlqf\" (UniqueName: \"kubernetes.io/projected/bf224f56-f5f4-4165-a68b-2c7c7af2a054-kube-api-access-kwlqf\") pod \"redhat-operators-fg8q5\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") " pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:01 crc kubenswrapper[4770]: I0126 20:00:01.847659 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg8q5" Jan 26 20:00:02 crc kubenswrapper[4770]: W0126 20:00:02.362364 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf224f56_f5f4_4165_a68b_2c7c7af2a054.slice/crio-e6be4568f564a48e4a1ca1ebc2e87134ecb1f516d7dffd644c8535d4ef119821 WatchSource:0}: Error finding container e6be4568f564a48e4a1ca1ebc2e87134ecb1f516d7dffd644c8535d4ef119821: Status 404 returned error can't find the container with id e6be4568f564a48e4a1ca1ebc2e87134ecb1f516d7dffd644c8535d4ef119821 Jan 26 20:00:02 crc kubenswrapper[4770]: I0126 20:00:02.368774 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fg8q5"] Jan 26 20:00:02 crc kubenswrapper[4770]: I0126 20:00:02.374269 4770 generic.go:334] "Generic (PLEG): container finished" podID="2693b80a-67e1-426f-b8d2-4ed53e6247fe" containerID="d6a0495f4db92e618fa853476c23e73603fa7d1bd62a8e974f3cbb68a2737d9e" exitCode=0 Jan 26 20:00:02 crc kubenswrapper[4770]: I0126 20:00:02.374310 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" 
event={"ID":"2693b80a-67e1-426f-b8d2-4ed53e6247fe","Type":"ContainerDied","Data":"d6a0495f4db92e618fa853476c23e73603fa7d1bd62a8e974f3cbb68a2737d9e"} Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.385103 4770 generic.go:334] "Generic (PLEG): container finished" podID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerID="a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32" exitCode=0 Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.385166 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg8q5" event={"ID":"bf224f56-f5f4-4165-a68b-2c7c7af2a054","Type":"ContainerDied","Data":"a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32"} Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.385406 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg8q5" event={"ID":"bf224f56-f5f4-4165-a68b-2c7c7af2a054","Type":"ContainerStarted","Data":"e6be4568f564a48e4a1ca1ebc2e87134ecb1f516d7dffd644c8535d4ef119821"} Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.780754 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw"
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.867730 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2693b80a-67e1-426f-b8d2-4ed53e6247fe-config-volume\") pod \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") "
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.867867 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2693b80a-67e1-426f-b8d2-4ed53e6247fe-secret-volume\") pod \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") "
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.867970 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lc5\" (UniqueName: \"kubernetes.io/projected/2693b80a-67e1-426f-b8d2-4ed53e6247fe-kube-api-access-k9lc5\") pod \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\" (UID: \"2693b80a-67e1-426f-b8d2-4ed53e6247fe\") "
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.868772 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2693b80a-67e1-426f-b8d2-4ed53e6247fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "2693b80a-67e1-426f-b8d2-4ed53e6247fe" (UID: "2693b80a-67e1-426f-b8d2-4ed53e6247fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.878038 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2693b80a-67e1-426f-b8d2-4ed53e6247fe-kube-api-access-k9lc5" (OuterVolumeSpecName: "kube-api-access-k9lc5") pod "2693b80a-67e1-426f-b8d2-4ed53e6247fe" (UID: "2693b80a-67e1-426f-b8d2-4ed53e6247fe"). InnerVolumeSpecName "kube-api-access-k9lc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.879265 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2693b80a-67e1-426f-b8d2-4ed53e6247fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2693b80a-67e1-426f-b8d2-4ed53e6247fe" (UID: "2693b80a-67e1-426f-b8d2-4ed53e6247fe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.970302 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lc5\" (UniqueName: \"kubernetes.io/projected/2693b80a-67e1-426f-b8d2-4ed53e6247fe-kube-api-access-k9lc5\") on node \"crc\" DevicePath \"\""
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.970347 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2693b80a-67e1-426f-b8d2-4ed53e6247fe-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 20:00:03 crc kubenswrapper[4770]: I0126 20:00:03.970358 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2693b80a-67e1-426f-b8d2-4ed53e6247fe-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 20:00:04 crc kubenswrapper[4770]: I0126 20:00:04.398719 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw" event={"ID":"2693b80a-67e1-426f-b8d2-4ed53e6247fe","Type":"ContainerDied","Data":"453aaf9acff8ab22db3d7c27cde0eff74c55ee87ed5729ac30fd1e6940bd1820"}
Jan 26 20:00:04 crc kubenswrapper[4770]: I0126 20:00:04.398977 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="453aaf9acff8ab22db3d7c27cde0eff74c55ee87ed5729ac30fd1e6940bd1820"
Jan 26 20:00:04 crc kubenswrapper[4770]: I0126 20:00:04.398800 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490960-7rcjw"
Jan 26 20:00:04 crc kubenswrapper[4770]: I0126 20:00:04.474219 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd"]
Jan 26 20:00:04 crc kubenswrapper[4770]: I0126 20:00:04.485635 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490915-rlvtd"]
Jan 26 20:00:05 crc kubenswrapper[4770]: I0126 20:00:05.797061 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c65537-9b89-449e-8f1f-8036841225f2" path="/var/lib/kubelet/pods/77c65537-9b89-449e-8f1f-8036841225f2/volumes"
Jan 26 20:00:06 crc kubenswrapper[4770]: I0126 20:00:06.426195 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg8q5" event={"ID":"bf224f56-f5f4-4165-a68b-2c7c7af2a054","Type":"ContainerStarted","Data":"b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7"}
Jan 26 20:00:09 crc kubenswrapper[4770]: I0126 20:00:09.465346 4770 generic.go:334] "Generic (PLEG): container finished" podID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerID="b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7" exitCode=0
Jan 26 20:00:09 crc kubenswrapper[4770]: I0126 20:00:09.465545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg8q5" event={"ID":"bf224f56-f5f4-4165-a68b-2c7c7af2a054","Type":"ContainerDied","Data":"b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7"}
Jan 26 20:00:10 crc kubenswrapper[4770]: I0126 20:00:10.507692 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg8q5" event={"ID":"bf224f56-f5f4-4165-a68b-2c7c7af2a054","Type":"ContainerStarted","Data":"a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31"}
Jan 26 20:00:10 crc kubenswrapper[4770]: I0126 20:00:10.538138 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fg8q5" podStartSLOduration=2.9419978650000003 podStartE2EDuration="9.538116879s" podCreationTimestamp="2026-01-26 20:00:01 +0000 UTC" firstStartedPulling="2026-01-26 20:00:03.387205744 +0000 UTC m=+4687.952112516" lastFinishedPulling="2026-01-26 20:00:09.983324788 +0000 UTC m=+4694.548231530" observedRunningTime="2026-01-26 20:00:10.527598985 +0000 UTC m=+4695.092505737" watchObservedRunningTime="2026-01-26 20:00:10.538116879 +0000 UTC m=+4695.103023631"
Jan 26 20:00:11 crc kubenswrapper[4770]: I0126 20:00:11.848765 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fg8q5"
Jan 26 20:00:11 crc kubenswrapper[4770]: I0126 20:00:11.849025 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fg8q5"
Jan 26 20:00:12 crc kubenswrapper[4770]: I0126 20:00:12.899055 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fg8q5" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="registry-server" probeResult="failure" output=<
Jan 26 20:00:12 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s
Jan 26 20:00:12 crc kubenswrapper[4770]: >
Jan 26 20:00:21 crc kubenswrapper[4770]: I0126 20:00:21.711315 4770 scope.go:117] "RemoveContainer" containerID="ff8fea5932d0f4cbd70dc9f75ce204653a4482359c3e91b21c3ec99cd4968449"
Jan 26 20:00:21 crc kubenswrapper[4770]: I0126 20:00:21.922472 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fg8q5"
Jan 26 20:00:21 crc kubenswrapper[4770]: I0126 20:00:21.984681 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fg8q5"
Jan 26 20:00:22 crc kubenswrapper[4770]: I0126 20:00:22.167890 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fg8q5"]
Jan 26 20:00:23 crc kubenswrapper[4770]: I0126 20:00:23.669569 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fg8q5" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="registry-server" containerID="cri-o://a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31" gracePeriod=2
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.115981 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg8q5"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.309177 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-utilities\") pod \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") "
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.309554 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwlqf\" (UniqueName: \"kubernetes.io/projected/bf224f56-f5f4-4165-a68b-2c7c7af2a054-kube-api-access-kwlqf\") pod \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") "
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.309641 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-catalog-content\") pod \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\" (UID: \"bf224f56-f5f4-4165-a68b-2c7c7af2a054\") "
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.310250 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-utilities" (OuterVolumeSpecName: "utilities") pod "bf224f56-f5f4-4165-a68b-2c7c7af2a054" (UID: "bf224f56-f5f4-4165-a68b-2c7c7af2a054"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.310500 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.318944 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf224f56-f5f4-4165-a68b-2c7c7af2a054-kube-api-access-kwlqf" (OuterVolumeSpecName: "kube-api-access-kwlqf") pod "bf224f56-f5f4-4165-a68b-2c7c7af2a054" (UID: "bf224f56-f5f4-4165-a68b-2c7c7af2a054"). InnerVolumeSpecName "kube-api-access-kwlqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.412487 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwlqf\" (UniqueName: \"kubernetes.io/projected/bf224f56-f5f4-4165-a68b-2c7c7af2a054-kube-api-access-kwlqf\") on node \"crc\" DevicePath \"\""
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.456593 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf224f56-f5f4-4165-a68b-2c7c7af2a054" (UID: "bf224f56-f5f4-4165-a68b-2c7c7af2a054"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.514710 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf224f56-f5f4-4165-a68b-2c7c7af2a054-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.681967 4770 generic.go:334] "Generic (PLEG): container finished" podID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerID="a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31" exitCode=0
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.682040 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg8q5" event={"ID":"bf224f56-f5f4-4165-a68b-2c7c7af2a054","Type":"ContainerDied","Data":"a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31"}
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.682066 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg8q5"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.682086 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg8q5" event={"ID":"bf224f56-f5f4-4165-a68b-2c7c7af2a054","Type":"ContainerDied","Data":"e6be4568f564a48e4a1ca1ebc2e87134ecb1f516d7dffd644c8535d4ef119821"}
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.682108 4770 scope.go:117] "RemoveContainer" containerID="a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.701844 4770 scope.go:117] "RemoveContainer" containerID="b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.716533 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fg8q5"]
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.726447 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fg8q5"]
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.741036 4770 scope.go:117] "RemoveContainer" containerID="a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.771270 4770 scope.go:117] "RemoveContainer" containerID="a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31"
Jan 26 20:00:24 crc kubenswrapper[4770]: E0126 20:00:24.771820 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31\": container with ID starting with a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31 not found: ID does not exist" containerID="a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.771872 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31"} err="failed to get container status \"a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31\": rpc error: code = NotFound desc = could not find container \"a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31\": container with ID starting with a9bb6125a389e593f864526881038088cb8f15a607e2be4990f04d38fa108d31 not found: ID does not exist"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.771903 4770 scope.go:117] "RemoveContainer" containerID="b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7"
Jan 26 20:00:24 crc kubenswrapper[4770]: E0126 20:00:24.772173 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7\": container with ID starting with b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7 not found: ID does not exist" containerID="b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.772202 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7"} err="failed to get container status \"b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7\": rpc error: code = NotFound desc = could not find container \"b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7\": container with ID starting with b1c5ae70e3fdeaf9c6aea7573f4f7c668db85d9007257af75acd62ace5d1d8c7 not found: ID does not exist"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.772222 4770 scope.go:117] "RemoveContainer" containerID="a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32"
Jan 26 20:00:24 crc kubenswrapper[4770]: E0126 20:00:24.772455 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32\": container with ID starting with a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32 not found: ID does not exist" containerID="a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32"
Jan 26 20:00:24 crc kubenswrapper[4770]: I0126 20:00:24.772484 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32"} err="failed to get container status \"a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32\": rpc error: code = NotFound desc = could not find container \"a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32\": container with ID starting with a990c91d51fca565bf582f582a54c35bd4b6cf9b04e1e626aab35285d4195b32 not found: ID does not exist"
Jan 26 20:00:25 crc kubenswrapper[4770]: I0126 20:00:25.784336 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" path="/var/lib/kubelet/pods/bf224f56-f5f4-4165-a68b-2c7c7af2a054/volumes"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.170764 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490961-npqvs"]
Jan 26 20:01:00 crc kubenswrapper[4770]: E0126 20:01:00.172336 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="registry-server"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.172365 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="registry-server"
Jan 26 20:01:00 crc kubenswrapper[4770]: E0126 20:01:00.172423 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="extract-utilities"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.172439 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="extract-utilities"
Jan 26 20:01:00 crc kubenswrapper[4770]: E0126 20:01:00.172463 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2693b80a-67e1-426f-b8d2-4ed53e6247fe" containerName="collect-profiles"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.172491 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2693b80a-67e1-426f-b8d2-4ed53e6247fe" containerName="collect-profiles"
Jan 26 20:01:00 crc kubenswrapper[4770]: E0126 20:01:00.172526 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="extract-content"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.172538 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="extract-content"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.172952 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2693b80a-67e1-426f-b8d2-4ed53e6247fe" containerName="collect-profiles"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.173000 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf224f56-f5f4-4165-a68b-2c7c7af2a054" containerName="registry-server"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.174162 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.196959 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-fernet-keys\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.197037 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-combined-ca-bundle\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.197072 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm52g\" (UniqueName: \"kubernetes.io/projected/f567e5e2-7857-417c-8258-63661d995e06-kube-api-access-cm52g\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.197190 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-config-data\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.199167 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490961-npqvs"]
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.299659 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-fernet-keys\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.299812 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-combined-ca-bundle\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.299856 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm52g\" (UniqueName: \"kubernetes.io/projected/f567e5e2-7857-417c-8258-63661d995e06-kube-api-access-cm52g\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.300019 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-config-data\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.309118 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-config-data\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.314406 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-fernet-keys\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.315546 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-combined-ca-bundle\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.321213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm52g\" (UniqueName: \"kubernetes.io/projected/f567e5e2-7857-417c-8258-63661d995e06-kube-api-access-cm52g\") pod \"keystone-cron-29490961-npqvs\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") " pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:00 crc kubenswrapper[4770]: I0126 20:01:00.507610 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:01 crc kubenswrapper[4770]: I0126 20:01:01.034071 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490961-npqvs"]
Jan 26 20:01:01 crc kubenswrapper[4770]: I0126 20:01:01.132979 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-npqvs" event={"ID":"f567e5e2-7857-417c-8258-63661d995e06","Type":"ContainerStarted","Data":"0a56d2dd6231b2570307e261e8e3c3d37e117d2ff7a2f358af1b16ecd40c0e64"}
Jan 26 20:01:02 crc kubenswrapper[4770]: I0126 20:01:02.142046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-npqvs" event={"ID":"f567e5e2-7857-417c-8258-63661d995e06","Type":"ContainerStarted","Data":"5dd9e521a62f1e090cf07bca1e1a01f5c09db6b941ee71fb964c5060978bd93e"}
Jan 26 20:01:02 crc kubenswrapper[4770]: I0126 20:01:02.167421 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490961-npqvs" podStartSLOduration=2.16740031 podStartE2EDuration="2.16740031s" podCreationTimestamp="2026-01-26 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:01:02.160626828 +0000 UTC m=+4746.725533560" watchObservedRunningTime="2026-01-26 20:01:02.16740031 +0000 UTC m=+4746.732307042"
Jan 26 20:01:05 crc kubenswrapper[4770]: I0126 20:01:05.170478 4770 generic.go:334] "Generic (PLEG): container finished" podID="f567e5e2-7857-417c-8258-63661d995e06" containerID="5dd9e521a62f1e090cf07bca1e1a01f5c09db6b941ee71fb964c5060978bd93e" exitCode=0
Jan 26 20:01:05 crc kubenswrapper[4770]: I0126 20:01:05.170556 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-npqvs" event={"ID":"f567e5e2-7857-417c-8258-63661d995e06","Type":"ContainerDied","Data":"5dd9e521a62f1e090cf07bca1e1a01f5c09db6b941ee71fb964c5060978bd93e"}
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.608392 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.780056 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm52g\" (UniqueName: \"kubernetes.io/projected/f567e5e2-7857-417c-8258-63661d995e06-kube-api-access-cm52g\") pod \"f567e5e2-7857-417c-8258-63661d995e06\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") "
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.781140 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-config-data\") pod \"f567e5e2-7857-417c-8258-63661d995e06\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") "
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.781211 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-combined-ca-bundle\") pod \"f567e5e2-7857-417c-8258-63661d995e06\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") "
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.781263 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-fernet-keys\") pod \"f567e5e2-7857-417c-8258-63661d995e06\" (UID: \"f567e5e2-7857-417c-8258-63661d995e06\") "
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.799981 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f567e5e2-7857-417c-8258-63661d995e06" (UID: "f567e5e2-7857-417c-8258-63661d995e06"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.800116 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f567e5e2-7857-417c-8258-63661d995e06-kube-api-access-cm52g" (OuterVolumeSpecName: "kube-api-access-cm52g") pod "f567e5e2-7857-417c-8258-63661d995e06" (UID: "f567e5e2-7857-417c-8258-63661d995e06"). InnerVolumeSpecName "kube-api-access-cm52g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.822802 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f567e5e2-7857-417c-8258-63661d995e06" (UID: "f567e5e2-7857-417c-8258-63661d995e06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.856067 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-config-data" (OuterVolumeSpecName: "config-data") pod "f567e5e2-7857-417c-8258-63661d995e06" (UID: "f567e5e2-7857-417c-8258-63661d995e06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.883858 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.883890 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.883900 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f567e5e2-7857-417c-8258-63661d995e06-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 20:01:06 crc kubenswrapper[4770]: I0126 20:01:06.883909 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm52g\" (UniqueName: \"kubernetes.io/projected/f567e5e2-7857-417c-8258-63661d995e06-kube-api-access-cm52g\") on node \"crc\" DevicePath \"\""
Jan 26 20:01:07 crc kubenswrapper[4770]: I0126 20:01:07.192485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490961-npqvs" event={"ID":"f567e5e2-7857-417c-8258-63661d995e06","Type":"ContainerDied","Data":"0a56d2dd6231b2570307e261e8e3c3d37e117d2ff7a2f358af1b16ecd40c0e64"}
Jan 26 20:01:07 crc kubenswrapper[4770]: I0126 20:01:07.192787 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a56d2dd6231b2570307e261e8e3c3d37e117d2ff7a2f358af1b16ecd40c0e64"
Jan 26 20:01:07 crc kubenswrapper[4770]: I0126 20:01:07.192676 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490961-npqvs"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.684608 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ldzz"]
Jan 26 20:01:17 crc kubenswrapper[4770]: E0126 20:01:17.686221 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f567e5e2-7857-417c-8258-63661d995e06" containerName="keystone-cron"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.686251 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f567e5e2-7857-417c-8258-63661d995e06" containerName="keystone-cron"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.686426 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f567e5e2-7857-417c-8258-63661d995e06" containerName="keystone-cron"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.687874 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.695670 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ldzz"]
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.744639 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-catalog-content\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.744732 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-utilities\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.745015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r885c\" (UniqueName: \"kubernetes.io/projected/1be9b16c-1755-4497-8034-7534779e279d-kube-api-access-r885c\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.847335 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-catalog-content\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.847413 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-utilities\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.847589 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r885c\" (UniqueName: \"kubernetes.io/projected/1be9b16c-1755-4497-8034-7534779e279d-kube-api-access-r885c\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.847965 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-catalog-content\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.848026 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-utilities\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:17 crc kubenswrapper[4770]: I0126 20:01:17.867996 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r885c\" (UniqueName: \"kubernetes.io/projected/1be9b16c-1755-4497-8034-7534779e279d-kube-api-access-r885c\") pod \"redhat-marketplace-2ldzz\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:18 crc kubenswrapper[4770]: I0126 20:01:18.002992 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ldzz"
Jan 26 20:01:18 crc kubenswrapper[4770]: I0126 20:01:18.460119 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ldzz"]
Jan 26 20:01:19 crc kubenswrapper[4770]: I0126 20:01:19.346767 4770 generic.go:334] "Generic (PLEG): container finished" podID="1be9b16c-1755-4497-8034-7534779e279d" containerID="c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3" exitCode=0
Jan 26 20:01:19 crc kubenswrapper[4770]: I0126 20:01:19.346810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ldzz" event={"ID":"1be9b16c-1755-4497-8034-7534779e279d","Type":"ContainerDied","Data":"c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3"}
Jan 26 20:01:19 crc kubenswrapper[4770]: I0126 20:01:19.347002 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ldzz" event={"ID":"1be9b16c-1755-4497-8034-7534779e279d","Type":"ContainerStarted","Data":"1f54d98222577d89777530b86a07bde44bb8a32dbdf61727e812ec4fb4434641"}
Jan 26 20:01:19 crc kubenswrapper[4770]: I0126 20:01:19.351443 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 20:01:21 crc kubenswrapper[4770]: I0126 20:01:21.369351 4770 generic.go:334] "Generic (PLEG): container finished" podID="1be9b16c-1755-4497-8034-7534779e279d" containerID="184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7" exitCode=0
Jan 26 20:01:21 crc kubenswrapper[4770]: I0126 20:01:21.369404 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ldzz" event={"ID":"1be9b16c-1755-4497-8034-7534779e279d","Type":"ContainerDied","Data":"184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7"}
Jan 26 20:01:22 crc kubenswrapper[4770]: I0126 20:01:22.389650 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ldzz" event={"ID":"1be9b16c-1755-4497-8034-7534779e279d","Type":"ContainerStarted","Data":"af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef"}
Jan 26 20:01:22 crc kubenswrapper[4770]: I0126 20:01:22.420276 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ldzz" podStartSLOduration=2.9133844079999998 podStartE2EDuration="5.420258385s" podCreationTimestamp="2026-01-26 20:01:17 +0000 UTC" firstStartedPulling="2026-01-26 20:01:19.351189858 +0000 UTC m=+4763.916096590" lastFinishedPulling="2026-01-26 20:01:21.858063835 +0000 UTC m=+4766.422970567" observedRunningTime="2026-01-26 20:01:22.416375251 +0000 UTC m=+4766.981282003" watchObservedRunningTime="2026-01-26 20:01:22.420258385 +0000 UTC m=+4766.985165117"
Jan 26 20:01:28 crc kubenswrapper[4770]: I0126 20:01:28.003404 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-2ldzz" Jan 26 20:01:28 crc kubenswrapper[4770]: I0126 20:01:28.004046 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ldzz" Jan 26 20:01:28 crc kubenswrapper[4770]: I0126 20:01:28.096551 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ldzz" Jan 26 20:01:28 crc kubenswrapper[4770]: I0126 20:01:28.502357 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ldzz" Jan 26 20:01:30 crc kubenswrapper[4770]: I0126 20:01:30.277895 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ldzz"] Jan 26 20:01:30 crc kubenswrapper[4770]: I0126 20:01:30.330518 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:01:30 crc kubenswrapper[4770]: I0126 20:01:30.330578 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:01:30 crc kubenswrapper[4770]: I0126 20:01:30.469865 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2ldzz" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="registry-server" containerID="cri-o://af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef" gracePeriod=2 Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.062022 4770 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ldzz" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.069799 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-utilities\") pod \"1be9b16c-1755-4497-8034-7534779e279d\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.069921 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r885c\" (UniqueName: \"kubernetes.io/projected/1be9b16c-1755-4497-8034-7534779e279d-kube-api-access-r885c\") pod \"1be9b16c-1755-4497-8034-7534779e279d\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.070090 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-catalog-content\") pod \"1be9b16c-1755-4497-8034-7534779e279d\" (UID: \"1be9b16c-1755-4497-8034-7534779e279d\") " Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.070826 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-utilities" (OuterVolumeSpecName: "utilities") pod "1be9b16c-1755-4497-8034-7534779e279d" (UID: "1be9b16c-1755-4497-8034-7534779e279d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.075406 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be9b16c-1755-4497-8034-7534779e279d-kube-api-access-r885c" (OuterVolumeSpecName: "kube-api-access-r885c") pod "1be9b16c-1755-4497-8034-7534779e279d" (UID: "1be9b16c-1755-4497-8034-7534779e279d"). 
InnerVolumeSpecName "kube-api-access-r885c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.102065 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1be9b16c-1755-4497-8034-7534779e279d" (UID: "1be9b16c-1755-4497-8034-7534779e279d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.173347 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.173413 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be9b16c-1755-4497-8034-7534779e279d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.173435 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r885c\" (UniqueName: \"kubernetes.io/projected/1be9b16c-1755-4497-8034-7534779e279d-kube-api-access-r885c\") on node \"crc\" DevicePath \"\"" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.484601 4770 generic.go:334] "Generic (PLEG): container finished" podID="1be9b16c-1755-4497-8034-7534779e279d" containerID="af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef" exitCode=0 Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.484655 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ldzz" event={"ID":"1be9b16c-1755-4497-8034-7534779e279d","Type":"ContainerDied","Data":"af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef"} Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.484690 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ldzz" event={"ID":"1be9b16c-1755-4497-8034-7534779e279d","Type":"ContainerDied","Data":"1f54d98222577d89777530b86a07bde44bb8a32dbdf61727e812ec4fb4434641"} Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.484735 4770 scope.go:117] "RemoveContainer" containerID="af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.484794 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ldzz" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.508410 4770 scope.go:117] "RemoveContainer" containerID="184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7" Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.542979 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ldzz"] Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.553150 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ldzz"] Jan 26 20:01:31 crc kubenswrapper[4770]: I0126 20:01:31.778887 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be9b16c-1755-4497-8034-7534779e279d" path="/var/lib/kubelet/pods/1be9b16c-1755-4497-8034-7534779e279d/volumes" Jan 26 20:01:32 crc kubenswrapper[4770]: I0126 20:01:32.047818 4770 scope.go:117] "RemoveContainer" containerID="c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3" Jan 26 20:01:32 crc kubenswrapper[4770]: I0126 20:01:32.078826 4770 scope.go:117] "RemoveContainer" containerID="af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef" Jan 26 20:01:32 crc kubenswrapper[4770]: E0126 20:01:32.079435 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef\": container with ID starting with af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef not found: ID does not exist" containerID="af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef" Jan 26 20:01:32 crc kubenswrapper[4770]: I0126 20:01:32.079511 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef"} err="failed to get container status \"af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef\": rpc error: code = NotFound desc = could not find container \"af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef\": container with ID starting with af34203a6d9f868f7cbe126e23aacd4ea14294ead4a52dd8efb3f2644c6440ef not found: ID does not exist" Jan 26 20:01:32 crc kubenswrapper[4770]: I0126 20:01:32.079553 4770 scope.go:117] "RemoveContainer" containerID="184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7" Jan 26 20:01:32 crc kubenswrapper[4770]: E0126 20:01:32.080006 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7\": container with ID starting with 184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7 not found: ID does not exist" containerID="184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7" Jan 26 20:01:32 crc kubenswrapper[4770]: I0126 20:01:32.080064 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7"} err="failed to get container status \"184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7\": rpc error: code = NotFound desc = could not find container \"184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7\": container with ID 
starting with 184091fd3d51f8c104f7413c711b8ac2d0533ade0d2b7f9e24ffa96da1d547d7 not found: ID does not exist" Jan 26 20:01:32 crc kubenswrapper[4770]: I0126 20:01:32.080094 4770 scope.go:117] "RemoveContainer" containerID="c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3" Jan 26 20:01:32 crc kubenswrapper[4770]: E0126 20:01:32.080429 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3\": container with ID starting with c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3 not found: ID does not exist" containerID="c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3" Jan 26 20:01:32 crc kubenswrapper[4770]: I0126 20:01:32.080465 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3"} err="failed to get container status \"c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3\": rpc error: code = NotFound desc = could not find container \"c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3\": container with ID starting with c36a01390b26bc9a4c0f7f4d951637033789e29abba1091278350f194c5652e3 not found: ID does not exist" Jan 26 20:02:00 crc kubenswrapper[4770]: I0126 20:02:00.331098 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:02:00 crc kubenswrapper[4770]: I0126 20:02:00.331614 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:02:30 crc kubenswrapper[4770]: I0126 20:02:30.330328 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:02:30 crc kubenswrapper[4770]: I0126 20:02:30.331051 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:02:30 crc kubenswrapper[4770]: I0126 20:02:30.331115 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 20:02:30 crc kubenswrapper[4770]: I0126 20:02:30.332223 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fe3e29fd61144d4631a8c82432d61e0186e58f66a0e6bac3819bac70fc2507e"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:02:30 crc kubenswrapper[4770]: I0126 20:02:30.332318 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://0fe3e29fd61144d4631a8c82432d61e0186e58f66a0e6bac3819bac70fc2507e" gracePeriod=600 Jan 26 20:02:31 crc kubenswrapper[4770]: I0126 20:02:31.240593 4770 generic.go:334] "Generic (PLEG): container 
finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="0fe3e29fd61144d4631a8c82432d61e0186e58f66a0e6bac3819bac70fc2507e" exitCode=0 Jan 26 20:02:31 crc kubenswrapper[4770]: I0126 20:02:31.240712 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"0fe3e29fd61144d4631a8c82432d61e0186e58f66a0e6bac3819bac70fc2507e"} Jan 26 20:02:31 crc kubenswrapper[4770]: I0126 20:02:31.241229 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c"} Jan 26 20:02:31 crc kubenswrapper[4770]: I0126 20:02:31.241256 4770 scope.go:117] "RemoveContainer" containerID="02fe07d9808f04ea366bd81f76e79c8886813c1cae940ffc672dd8a8f8dbe75a" Jan 26 20:04:30 crc kubenswrapper[4770]: I0126 20:04:30.331133 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:04:30 crc kubenswrapper[4770]: I0126 20:04:30.331752 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:04:51 crc kubenswrapper[4770]: E0126 20:04:51.899677 4770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:57542->38.102.83.51:41531: write tcp 38.102.83.51:57542->38.102.83.51:41531: write: connection 
reset by peer Jan 26 20:05:00 crc kubenswrapper[4770]: I0126 20:05:00.331273 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:05:00 crc kubenswrapper[4770]: I0126 20:05:00.332006 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:05:30 crc kubenswrapper[4770]: I0126 20:05:30.330299 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:05:30 crc kubenswrapper[4770]: I0126 20:05:30.331013 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:05:30 crc kubenswrapper[4770]: I0126 20:05:30.331074 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 20:05:30 crc kubenswrapper[4770]: I0126 20:05:30.331989 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c"} 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:05:30 crc kubenswrapper[4770]: I0126 20:05:30.332076 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" gracePeriod=600 Jan 26 20:05:30 crc kubenswrapper[4770]: E0126 20:05:30.480729 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:05:30 crc kubenswrapper[4770]: E0126 20:05:30.606851 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6109a686_3ab2_465e_8a96_354f2ecbf491.slice/crio-conmon-9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c.scope\": RecentStats: unable to find data in memory cache]" Jan 26 20:05:31 crc kubenswrapper[4770]: I0126 20:05:31.355526 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" exitCode=0 Jan 26 20:05:31 crc kubenswrapper[4770]: I0126 20:05:31.355594 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" 
event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c"} Jan 26 20:05:31 crc kubenswrapper[4770]: I0126 20:05:31.356015 4770 scope.go:117] "RemoveContainer" containerID="0fe3e29fd61144d4631a8c82432d61e0186e58f66a0e6bac3819bac70fc2507e" Jan 26 20:05:31 crc kubenswrapper[4770]: I0126 20:05:31.356944 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:05:31 crc kubenswrapper[4770]: E0126 20:05:31.357541 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:05:44 crc kubenswrapper[4770]: I0126 20:05:44.767565 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:05:44 crc kubenswrapper[4770]: E0126 20:05:44.769022 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:05:56 crc kubenswrapper[4770]: I0126 20:05:56.768470 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:05:56 crc kubenswrapper[4770]: E0126 20:05:56.769466 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:06:08 crc kubenswrapper[4770]: I0126 20:06:08.767323 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:06:08 crc kubenswrapper[4770]: E0126 20:06:08.768229 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:06:23 crc kubenswrapper[4770]: I0126 20:06:23.767866 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:06:23 crc kubenswrapper[4770]: E0126 20:06:23.771386 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:06:36 crc kubenswrapper[4770]: I0126 20:06:36.767662 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:06:36 crc kubenswrapper[4770]: E0126 20:06:36.768474 4770 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:06:49 crc kubenswrapper[4770]: I0126 20:06:49.767519 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:06:49 crc kubenswrapper[4770]: E0126 20:06:49.770270 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:07:03 crc kubenswrapper[4770]: I0126 20:07:03.767860 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:07:03 crc kubenswrapper[4770]: E0126 20:07:03.770368 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:07:15 crc kubenswrapper[4770]: I0126 20:07:15.774248 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:07:15 crc kubenswrapper[4770]: E0126 20:07:15.775246 4770 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:07:26 crc kubenswrapper[4770]: I0126 20:07:26.767635 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:07:26 crc kubenswrapper[4770]: E0126 20:07:26.770661 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:07:41 crc kubenswrapper[4770]: I0126 20:07:41.767623 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:07:41 crc kubenswrapper[4770]: E0126 20:07:41.768770 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:07:55 crc kubenswrapper[4770]: I0126 20:07:55.779744 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:07:55 crc kubenswrapper[4770]: E0126 20:07:55.780408 4770 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:08:10 crc kubenswrapper[4770]: I0126 20:08:10.767629 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:08:10 crc kubenswrapper[4770]: E0126 20:08:10.768758 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:08:23 crc kubenswrapper[4770]: I0126 20:08:23.767247 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:08:23 crc kubenswrapper[4770]: E0126 20:08:23.768434 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:08:36 crc kubenswrapper[4770]: I0126 20:08:36.767393 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:08:36 crc kubenswrapper[4770]: E0126 
20:08:36.768827 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:08:51 crc kubenswrapper[4770]: I0126 20:08:51.768542 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:08:51 crc kubenswrapper[4770]: E0126 20:08:51.769567 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:09:02 crc kubenswrapper[4770]: I0126 20:09:02.768552 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:09:02 crc kubenswrapper[4770]: E0126 20:09:02.769847 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:09:15 crc kubenswrapper[4770]: I0126 20:09:15.776031 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:09:15 crc 
kubenswrapper[4770]: E0126 20:09:15.777066 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:09:27 crc kubenswrapper[4770]: I0126 20:09:27.768609 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:09:27 crc kubenswrapper[4770]: E0126 20:09:27.769884 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:09:38 crc kubenswrapper[4770]: I0126 20:09:38.766827 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:09:38 crc kubenswrapper[4770]: E0126 20:09:38.767636 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:09:51 crc kubenswrapper[4770]: I0126 20:09:51.767291 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 
26 20:09:51 crc kubenswrapper[4770]: E0126 20:09:51.768030 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:10:03 crc kubenswrapper[4770]: I0126 20:10:03.767738 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:10:03 crc kubenswrapper[4770]: E0126 20:10:03.768858 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.521221 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hnjtk"] Jan 26 20:10:09 crc kubenswrapper[4770]: E0126 20:10:09.522217 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="extract-content" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.522234 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="extract-content" Jan 26 20:10:09 crc kubenswrapper[4770]: E0126 20:10:09.522264 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="registry-server" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.522273 
4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="registry-server" Jan 26 20:10:09 crc kubenswrapper[4770]: E0126 20:10:09.522313 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="extract-utilities" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.522321 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="extract-utilities" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.522578 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be9b16c-1755-4497-8034-7534779e279d" containerName="registry-server" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.524323 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.543388 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hnjtk"] Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.665948 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbct\" (UniqueName: \"kubernetes.io/projected/48f6e308-b874-454e-ab59-e74a2999f34e-kube-api-access-tvbct\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.666035 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-utilities\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.666109 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-catalog-content\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.768095 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-utilities\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.768272 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-catalog-content\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.768464 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbct\" (UniqueName: \"kubernetes.io/projected/48f6e308-b874-454e-ab59-e74a2999f34e-kube-api-access-tvbct\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.768900 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-utilities\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.769035 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-catalog-content\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.813755 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbct\" (UniqueName: \"kubernetes.io/projected/48f6e308-b874-454e-ab59-e74a2999f34e-kube-api-access-tvbct\") pod \"certified-operators-hnjtk\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:09 crc kubenswrapper[4770]: I0126 20:10:09.852888 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:10 crc kubenswrapper[4770]: I0126 20:10:10.402065 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hnjtk"] Jan 26 20:10:11 crc kubenswrapper[4770]: I0126 20:10:11.382968 4770 generic.go:334] "Generic (PLEG): container finished" podID="48f6e308-b874-454e-ab59-e74a2999f34e" containerID="a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac" exitCode=0 Jan 26 20:10:11 crc kubenswrapper[4770]: I0126 20:10:11.383037 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnjtk" event={"ID":"48f6e308-b874-454e-ab59-e74a2999f34e","Type":"ContainerDied","Data":"a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac"} Jan 26 20:10:11 crc kubenswrapper[4770]: I0126 20:10:11.383379 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnjtk" event={"ID":"48f6e308-b874-454e-ab59-e74a2999f34e","Type":"ContainerStarted","Data":"e2a87a056aba0091041b9c504bc495a0e78ca2c94cca811d7330de95249d806e"} Jan 26 
20:10:11 crc kubenswrapper[4770]: I0126 20:10:11.386307 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:10:13 crc kubenswrapper[4770]: I0126 20:10:13.406176 4770 generic.go:334] "Generic (PLEG): container finished" podID="48f6e308-b874-454e-ab59-e74a2999f34e" containerID="290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab" exitCode=0 Jan 26 20:10:13 crc kubenswrapper[4770]: I0126 20:10:13.406497 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnjtk" event={"ID":"48f6e308-b874-454e-ab59-e74a2999f34e","Type":"ContainerDied","Data":"290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab"} Jan 26 20:10:14 crc kubenswrapper[4770]: I0126 20:10:14.417647 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnjtk" event={"ID":"48f6e308-b874-454e-ab59-e74a2999f34e","Type":"ContainerStarted","Data":"201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc"} Jan 26 20:10:17 crc kubenswrapper[4770]: I0126 20:10:17.767496 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:10:17 crc kubenswrapper[4770]: E0126 20:10:17.768925 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:10:19 crc kubenswrapper[4770]: I0126 20:10:19.853885 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:19 crc kubenswrapper[4770]: I0126 20:10:19.854608 4770 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:19 crc kubenswrapper[4770]: I0126 20:10:19.939136 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:19 crc kubenswrapper[4770]: I0126 20:10:19.970951 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hnjtk" podStartSLOduration=8.531607456 podStartE2EDuration="10.970919502s" podCreationTimestamp="2026-01-26 20:10:09 +0000 UTC" firstStartedPulling="2026-01-26 20:10:11.385931964 +0000 UTC m=+5295.950838726" lastFinishedPulling="2026-01-26 20:10:13.82524401 +0000 UTC m=+5298.390150772" observedRunningTime="2026-01-26 20:10:14.43938974 +0000 UTC m=+5299.004296482" watchObservedRunningTime="2026-01-26 20:10:19.970919502 +0000 UTC m=+5304.535826264" Jan 26 20:10:20 crc kubenswrapper[4770]: I0126 20:10:20.578435 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:20 crc kubenswrapper[4770]: I0126 20:10:20.643958 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hnjtk"] Jan 26 20:10:22 crc kubenswrapper[4770]: I0126 20:10:22.521566 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hnjtk" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="registry-server" containerID="cri-o://201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc" gracePeriod=2 Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.070338 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.110932 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-utilities\") pod \"48f6e308-b874-454e-ab59-e74a2999f34e\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.111170 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvbct\" (UniqueName: \"kubernetes.io/projected/48f6e308-b874-454e-ab59-e74a2999f34e-kube-api-access-tvbct\") pod \"48f6e308-b874-454e-ab59-e74a2999f34e\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.111327 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-catalog-content\") pod \"48f6e308-b874-454e-ab59-e74a2999f34e\" (UID: \"48f6e308-b874-454e-ab59-e74a2999f34e\") " Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.112115 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-utilities" (OuterVolumeSpecName: "utilities") pod "48f6e308-b874-454e-ab59-e74a2999f34e" (UID: "48f6e308-b874-454e-ab59-e74a2999f34e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.117267 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f6e308-b874-454e-ab59-e74a2999f34e-kube-api-access-tvbct" (OuterVolumeSpecName: "kube-api-access-tvbct") pod "48f6e308-b874-454e-ab59-e74a2999f34e" (UID: "48f6e308-b874-454e-ab59-e74a2999f34e"). InnerVolumeSpecName "kube-api-access-tvbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.176490 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48f6e308-b874-454e-ab59-e74a2999f34e" (UID: "48f6e308-b874-454e-ab59-e74a2999f34e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.214764 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.215072 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvbct\" (UniqueName: \"kubernetes.io/projected/48f6e308-b874-454e-ab59-e74a2999f34e-kube-api-access-tvbct\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.215198 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f6e308-b874-454e-ab59-e74a2999f34e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.536906 4770 generic.go:334] "Generic (PLEG): container finished" podID="48f6e308-b874-454e-ab59-e74a2999f34e" containerID="201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc" exitCode=0 Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.536993 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hnjtk" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.537023 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnjtk" event={"ID":"48f6e308-b874-454e-ab59-e74a2999f34e","Type":"ContainerDied","Data":"201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc"} Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.537815 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnjtk" event={"ID":"48f6e308-b874-454e-ab59-e74a2999f34e","Type":"ContainerDied","Data":"e2a87a056aba0091041b9c504bc495a0e78ca2c94cca811d7330de95249d806e"} Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.537839 4770 scope.go:117] "RemoveContainer" containerID="201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.564224 4770 scope.go:117] "RemoveContainer" containerID="290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.599549 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hnjtk"] Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.613968 4770 scope.go:117] "RemoveContainer" containerID="a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.614914 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hnjtk"] Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.666898 4770 scope.go:117] "RemoveContainer" containerID="201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc" Jan 26 20:10:23 crc kubenswrapper[4770]: E0126 20:10:23.667333 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc\": container with ID starting with 201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc not found: ID does not exist" containerID="201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.667369 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc"} err="failed to get container status \"201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc\": rpc error: code = NotFound desc = could not find container \"201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc\": container with ID starting with 201b4a63bd7225fe0f3af165a97e1956298a25c3b406d079b8ff3847381288bc not found: ID does not exist" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.667392 4770 scope.go:117] "RemoveContainer" containerID="290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab" Jan 26 20:10:23 crc kubenswrapper[4770]: E0126 20:10:23.667587 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab\": container with ID starting with 290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab not found: ID does not exist" containerID="290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.667607 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab"} err="failed to get container status \"290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab\": rpc error: code = NotFound desc = could not find container \"290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab\": container with ID 
starting with 290c25457d4fd912255938a8378c5ce3f11f75dc10cbaf4e624a7db1b2006bab not found: ID does not exist" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.667622 4770 scope.go:117] "RemoveContainer" containerID="a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac" Jan 26 20:10:23 crc kubenswrapper[4770]: E0126 20:10:23.667802 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac\": container with ID starting with a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac not found: ID does not exist" containerID="a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.667818 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac"} err="failed to get container status \"a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac\": rpc error: code = NotFound desc = could not find container \"a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac\": container with ID starting with a789cbc56e1ea4a1db13d2e89616ddfe40f681ffca4acf6ed01436b46a2c3bac not found: ID does not exist" Jan 26 20:10:23 crc kubenswrapper[4770]: I0126 20:10:23.787004 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" path="/var/lib/kubelet/pods/48f6e308-b874-454e-ab59-e74a2999f34e/volumes" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.616168 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tk4w5"] Jan 26 20:10:26 crc kubenswrapper[4770]: E0126 20:10:26.622054 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="extract-utilities" Jan 26 20:10:26 crc 
kubenswrapper[4770]: I0126 20:10:26.622181 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="extract-utilities" Jan 26 20:10:26 crc kubenswrapper[4770]: E0126 20:10:26.622463 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="extract-content" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.622475 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="extract-content" Jan 26 20:10:26 crc kubenswrapper[4770]: E0126 20:10:26.622494 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="registry-server" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.622506 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="registry-server" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.622867 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f6e308-b874-454e-ab59-e74a2999f34e" containerName="registry-server" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.625320 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.651325 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tk4w5"] Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.799118 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-catalog-content\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.799179 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-utilities\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.799423 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp28x\" (UniqueName: \"kubernetes.io/projected/81cdf843-4508-40d8-9139-18830ac7e4bc-kube-api-access-bp28x\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.902486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-catalog-content\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.902598 4770 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-utilities\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.902736 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp28x\" (UniqueName: \"kubernetes.io/projected/81cdf843-4508-40d8-9139-18830ac7e4bc-kube-api-access-bp28x\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.902945 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-catalog-content\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.903071 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-utilities\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.938040 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp28x\" (UniqueName: \"kubernetes.io/projected/81cdf843-4508-40d8-9139-18830ac7e4bc-kube-api-access-bp28x\") pod \"redhat-operators-tk4w5\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:26 crc kubenswrapper[4770]: I0126 20:10:26.951848 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:27 crc kubenswrapper[4770]: I0126 20:10:27.408869 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tk4w5"] Jan 26 20:10:27 crc kubenswrapper[4770]: I0126 20:10:27.576024 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk4w5" event={"ID":"81cdf843-4508-40d8-9139-18830ac7e4bc","Type":"ContainerStarted","Data":"e340ddb2402544ec792a899ffbb054b89f73578ee9aaaee9225c9e1b11f7b3b4"} Jan 26 20:10:28 crc kubenswrapper[4770]: I0126 20:10:28.594401 4770 generic.go:334] "Generic (PLEG): container finished" podID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerID="cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57" exitCode=0 Jan 26 20:10:28 crc kubenswrapper[4770]: I0126 20:10:28.594756 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk4w5" event={"ID":"81cdf843-4508-40d8-9139-18830ac7e4bc","Type":"ContainerDied","Data":"cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57"} Jan 26 20:10:29 crc kubenswrapper[4770]: I0126 20:10:29.609589 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk4w5" event={"ID":"81cdf843-4508-40d8-9139-18830ac7e4bc","Type":"ContainerStarted","Data":"90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4"} Jan 26 20:10:30 crc kubenswrapper[4770]: I0126 20:10:30.767490 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:10:32 crc kubenswrapper[4770]: I0126 20:10:32.654652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"941c2a6b7796e3b634da0bf18702ef70b7774b46ea7e444cdf5b6dba7973df46"} Jan 26 20:10:33 crc 
kubenswrapper[4770]: I0126 20:10:33.670788 4770 generic.go:334] "Generic (PLEG): container finished" podID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerID="90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4" exitCode=0 Jan 26 20:10:33 crc kubenswrapper[4770]: I0126 20:10:33.670870 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk4w5" event={"ID":"81cdf843-4508-40d8-9139-18830ac7e4bc","Type":"ContainerDied","Data":"90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4"} Jan 26 20:10:34 crc kubenswrapper[4770]: I0126 20:10:34.684555 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk4w5" event={"ID":"81cdf843-4508-40d8-9139-18830ac7e4bc","Type":"ContainerStarted","Data":"66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520"} Jan 26 20:10:34 crc kubenswrapper[4770]: I0126 20:10:34.709280 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tk4w5" podStartSLOduration=3.197611909 podStartE2EDuration="8.709257657s" podCreationTimestamp="2026-01-26 20:10:26 +0000 UTC" firstStartedPulling="2026-01-26 20:10:28.60437984 +0000 UTC m=+5313.169286572" lastFinishedPulling="2026-01-26 20:10:34.116025578 +0000 UTC m=+5318.680932320" observedRunningTime="2026-01-26 20:10:34.708762153 +0000 UTC m=+5319.273668895" watchObservedRunningTime="2026-01-26 20:10:34.709257657 +0000 UTC m=+5319.274164389" Jan 26 20:10:36 crc kubenswrapper[4770]: I0126 20:10:36.952172 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:36 crc kubenswrapper[4770]: I0126 20:10:36.953200 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:38 crc kubenswrapper[4770]: I0126 20:10:38.034747 4770 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-tk4w5" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="registry-server" probeResult="failure" output=< Jan 26 20:10:38 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 20:10:38 crc kubenswrapper[4770]: > Jan 26 20:10:47 crc kubenswrapper[4770]: I0126 20:10:47.027525 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:47 crc kubenswrapper[4770]: I0126 20:10:47.093584 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:47 crc kubenswrapper[4770]: I0126 20:10:47.272001 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tk4w5"] Jan 26 20:10:48 crc kubenswrapper[4770]: I0126 20:10:48.848843 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tk4w5" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="registry-server" containerID="cri-o://66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520" gracePeriod=2 Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.421501 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.536340 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-catalog-content\") pod \"81cdf843-4508-40d8-9139-18830ac7e4bc\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.536460 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp28x\" (UniqueName: \"kubernetes.io/projected/81cdf843-4508-40d8-9139-18830ac7e4bc-kube-api-access-bp28x\") pod \"81cdf843-4508-40d8-9139-18830ac7e4bc\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.536531 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-utilities\") pod \"81cdf843-4508-40d8-9139-18830ac7e4bc\" (UID: \"81cdf843-4508-40d8-9139-18830ac7e4bc\") " Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.537828 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-utilities" (OuterVolumeSpecName: "utilities") pod "81cdf843-4508-40d8-9139-18830ac7e4bc" (UID: "81cdf843-4508-40d8-9139-18830ac7e4bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.542953 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cdf843-4508-40d8-9139-18830ac7e4bc-kube-api-access-bp28x" (OuterVolumeSpecName: "kube-api-access-bp28x") pod "81cdf843-4508-40d8-9139-18830ac7e4bc" (UID: "81cdf843-4508-40d8-9139-18830ac7e4bc"). InnerVolumeSpecName "kube-api-access-bp28x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.639481 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp28x\" (UniqueName: \"kubernetes.io/projected/81cdf843-4508-40d8-9139-18830ac7e4bc-kube-api-access-bp28x\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.639538 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.685014 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81cdf843-4508-40d8-9139-18830ac7e4bc" (UID: "81cdf843-4508-40d8-9139-18830ac7e4bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.741176 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81cdf843-4508-40d8-9139-18830ac7e4bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.859462 4770 generic.go:334] "Generic (PLEG): container finished" podID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerID="66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520" exitCode=0 Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.859506 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tk4w5" event={"ID":"81cdf843-4508-40d8-9139-18830ac7e4bc","Type":"ContainerDied","Data":"66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520"} Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.859560 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-tk4w5" event={"ID":"81cdf843-4508-40d8-9139-18830ac7e4bc","Type":"ContainerDied","Data":"e340ddb2402544ec792a899ffbb054b89f73578ee9aaaee9225c9e1b11f7b3b4"} Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.859553 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tk4w5" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.859659 4770 scope.go:117] "RemoveContainer" containerID="66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.881459 4770 scope.go:117] "RemoveContainer" containerID="90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.885331 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tk4w5"] Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.900931 4770 scope.go:117] "RemoveContainer" containerID="cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.904394 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tk4w5"] Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.960213 4770 scope.go:117] "RemoveContainer" containerID="66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520" Jan 26 20:10:49 crc kubenswrapper[4770]: E0126 20:10:49.960797 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520\": container with ID starting with 66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520 not found: ID does not exist" containerID="66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.960840 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520"} err="failed to get container status \"66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520\": rpc error: code = NotFound desc = could not find container \"66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520\": container with ID starting with 66e05ce03daf167fa4ba8c615ac7879c45a4795ff7597b669eeb2acfbc4c1520 not found: ID does not exist" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.960870 4770 scope.go:117] "RemoveContainer" containerID="90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4" Jan 26 20:10:49 crc kubenswrapper[4770]: E0126 20:10:49.961205 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4\": container with ID starting with 90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4 not found: ID does not exist" containerID="90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.961250 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4"} err="failed to get container status \"90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4\": rpc error: code = NotFound desc = could not find container \"90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4\": container with ID starting with 90547cd533eb5237a6b53db79f614cdde3db2ca31587834958641b9b2dee18e4 not found: ID does not exist" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.961279 4770 scope.go:117] "RemoveContainer" containerID="cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57" Jan 26 20:10:49 crc kubenswrapper[4770]: E0126 
20:10:49.961854 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57\": container with ID starting with cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57 not found: ID does not exist" containerID="cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57" Jan 26 20:10:49 crc kubenswrapper[4770]: I0126 20:10:49.961878 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57"} err="failed to get container status \"cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57\": rpc error: code = NotFound desc = could not find container \"cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57\": container with ID starting with cfd6b4665a5096acaec3e828ceb38d0a189cff2c16058d09ecb3d181b3bb9f57 not found: ID does not exist" Jan 26 20:10:51 crc kubenswrapper[4770]: I0126 20:10:51.780647 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" path="/var/lib/kubelet/pods/81cdf843-4508-40d8-9139-18830ac7e4bc/volumes" Jan 26 20:11:57 crc kubenswrapper[4770]: I0126 20:11:57.925451 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hxl5d"] Jan 26 20:11:57 crc kubenswrapper[4770]: E0126 20:11:57.927920 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="extract-utilities" Jan 26 20:11:57 crc kubenswrapper[4770]: I0126 20:11:57.927942 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="extract-utilities" Jan 26 20:11:57 crc kubenswrapper[4770]: E0126 20:11:57.927982 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="extract-content" Jan 26 20:11:57 crc kubenswrapper[4770]: I0126 20:11:57.927992 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="extract-content" Jan 26 20:11:57 crc kubenswrapper[4770]: E0126 20:11:57.928036 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="registry-server" Jan 26 20:11:57 crc kubenswrapper[4770]: I0126 20:11:57.928046 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="registry-server" Jan 26 20:11:57 crc kubenswrapper[4770]: I0126 20:11:57.928578 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="81cdf843-4508-40d8-9139-18830ac7e4bc" containerName="registry-server" Jan 26 20:11:57 crc kubenswrapper[4770]: I0126 20:11:57.935862 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:57 crc kubenswrapper[4770]: I0126 20:11:57.949831 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxl5d"] Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.068269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbpp\" (UniqueName: \"kubernetes.io/projected/3534ad86-60bf-48cd-95f7-840006ba1620-kube-api-access-jmbpp\") pod \"redhat-marketplace-hxl5d\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.068634 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-utilities\") pod \"redhat-marketplace-hxl5d\" (UID: 
\"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.068739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-catalog-content\") pod \"redhat-marketplace-hxl5d\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.170331 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbpp\" (UniqueName: \"kubernetes.io/projected/3534ad86-60bf-48cd-95f7-840006ba1620-kube-api-access-jmbpp\") pod \"redhat-marketplace-hxl5d\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.170470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-utilities\") pod \"redhat-marketplace-hxl5d\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.170523 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-catalog-content\") pod \"redhat-marketplace-hxl5d\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.170974 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-utilities\") pod \"redhat-marketplace-hxl5d\" (UID: 
\"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.171048 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-catalog-content\") pod \"redhat-marketplace-hxl5d\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.274780 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbpp\" (UniqueName: \"kubernetes.io/projected/3534ad86-60bf-48cd-95f7-840006ba1620-kube-api-access-jmbpp\") pod \"redhat-marketplace-hxl5d\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.288120 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:11:58 crc kubenswrapper[4770]: I0126 20:11:58.758607 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxl5d"] Jan 26 20:11:59 crc kubenswrapper[4770]: I0126 20:11:59.675296 4770 generic.go:334] "Generic (PLEG): container finished" podID="3534ad86-60bf-48cd-95f7-840006ba1620" containerID="3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76" exitCode=0 Jan 26 20:11:59 crc kubenswrapper[4770]: I0126 20:11:59.675369 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxl5d" event={"ID":"3534ad86-60bf-48cd-95f7-840006ba1620","Type":"ContainerDied","Data":"3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76"} Jan 26 20:11:59 crc kubenswrapper[4770]: I0126 20:11:59.677041 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxl5d" 
event={"ID":"3534ad86-60bf-48cd-95f7-840006ba1620","Type":"ContainerStarted","Data":"204bc9ade2a14fc32996720ff1af447e4fbcb71826e5090135fc47fea103b725"} Jan 26 20:12:05 crc kubenswrapper[4770]: I0126 20:12:05.755605 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxl5d" event={"ID":"3534ad86-60bf-48cd-95f7-840006ba1620","Type":"ContainerStarted","Data":"5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff"} Jan 26 20:12:06 crc kubenswrapper[4770]: I0126 20:12:06.772012 4770 generic.go:334] "Generic (PLEG): container finished" podID="3534ad86-60bf-48cd-95f7-840006ba1620" containerID="5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff" exitCode=0 Jan 26 20:12:06 crc kubenswrapper[4770]: I0126 20:12:06.772045 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxl5d" event={"ID":"3534ad86-60bf-48cd-95f7-840006ba1620","Type":"ContainerDied","Data":"5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff"} Jan 26 20:12:07 crc kubenswrapper[4770]: I0126 20:12:07.784559 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxl5d" event={"ID":"3534ad86-60bf-48cd-95f7-840006ba1620","Type":"ContainerStarted","Data":"9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60"} Jan 26 20:12:07 crc kubenswrapper[4770]: I0126 20:12:07.814366 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hxl5d" podStartSLOduration=3.309773763 podStartE2EDuration="10.814344789s" podCreationTimestamp="2026-01-26 20:11:57 +0000 UTC" firstStartedPulling="2026-01-26 20:11:59.67837676 +0000 UTC m=+5404.243283532" lastFinishedPulling="2026-01-26 20:12:07.182947816 +0000 UTC m=+5411.747854558" observedRunningTime="2026-01-26 20:12:07.803161239 +0000 UTC m=+5412.368067961" watchObservedRunningTime="2026-01-26 20:12:07.814344789 +0000 UTC 
m=+5412.379251531" Jan 26 20:12:08 crc kubenswrapper[4770]: I0126 20:12:08.289900 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:12:08 crc kubenswrapper[4770]: I0126 20:12:08.289955 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:12:09 crc kubenswrapper[4770]: I0126 20:12:09.334284 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hxl5d" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="registry-server" probeResult="failure" output=< Jan 26 20:12:09 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 20:12:09 crc kubenswrapper[4770]: > Jan 26 20:12:18 crc kubenswrapper[4770]: I0126 20:12:18.383644 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:12:18 crc kubenswrapper[4770]: I0126 20:12:18.458505 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:12:18 crc kubenswrapper[4770]: I0126 20:12:18.627548 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxl5d"] Jan 26 20:12:19 crc kubenswrapper[4770]: I0126 20:12:19.964464 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hxl5d" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="registry-server" containerID="cri-o://9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60" gracePeriod=2 Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.485143 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.543559 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmbpp\" (UniqueName: \"kubernetes.io/projected/3534ad86-60bf-48cd-95f7-840006ba1620-kube-api-access-jmbpp\") pod \"3534ad86-60bf-48cd-95f7-840006ba1620\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.543741 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-utilities\") pod \"3534ad86-60bf-48cd-95f7-840006ba1620\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.543884 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-catalog-content\") pod \"3534ad86-60bf-48cd-95f7-840006ba1620\" (UID: \"3534ad86-60bf-48cd-95f7-840006ba1620\") " Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.544523 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-utilities" (OuterVolumeSpecName: "utilities") pod "3534ad86-60bf-48cd-95f7-840006ba1620" (UID: "3534ad86-60bf-48cd-95f7-840006ba1620"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.555640 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3534ad86-60bf-48cd-95f7-840006ba1620-kube-api-access-jmbpp" (OuterVolumeSpecName: "kube-api-access-jmbpp") pod "3534ad86-60bf-48cd-95f7-840006ba1620" (UID: "3534ad86-60bf-48cd-95f7-840006ba1620"). InnerVolumeSpecName "kube-api-access-jmbpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.576921 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3534ad86-60bf-48cd-95f7-840006ba1620" (UID: "3534ad86-60bf-48cd-95f7-840006ba1620"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.646128 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.646160 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3534ad86-60bf-48cd-95f7-840006ba1620-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.646174 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmbpp\" (UniqueName: \"kubernetes.io/projected/3534ad86-60bf-48cd-95f7-840006ba1620-kube-api-access-jmbpp\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.984857 4770 generic.go:334] "Generic (PLEG): container finished" podID="3534ad86-60bf-48cd-95f7-840006ba1620" containerID="9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60" exitCode=0 Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.984905 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hxl5d" event={"ID":"3534ad86-60bf-48cd-95f7-840006ba1620","Type":"ContainerDied","Data":"9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60"} Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.984967 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-hxl5d" event={"ID":"3534ad86-60bf-48cd-95f7-840006ba1620","Type":"ContainerDied","Data":"204bc9ade2a14fc32996720ff1af447e4fbcb71826e5090135fc47fea103b725"} Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.984991 4770 scope.go:117] "RemoveContainer" containerID="9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60" Jan 26 20:12:20 crc kubenswrapper[4770]: I0126 20:12:20.985003 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hxl5d" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.019876 4770 scope.go:117] "RemoveContainer" containerID="5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.039043 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxl5d"] Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.057639 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hxl5d"] Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.062888 4770 scope.go:117] "RemoveContainer" containerID="3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.125118 4770 scope.go:117] "RemoveContainer" containerID="9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60" Jan 26 20:12:21 crc kubenswrapper[4770]: E0126 20:12:21.126172 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60\": container with ID starting with 9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60 not found: ID does not exist" containerID="9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.126211 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60"} err="failed to get container status \"9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60\": rpc error: code = NotFound desc = could not find container \"9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60\": container with ID starting with 9e7cfb75bb5be24394978fed706b46568b18b84ea2418817afa323fb26990e60 not found: ID does not exist" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.126237 4770 scope.go:117] "RemoveContainer" containerID="5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff" Jan 26 20:12:21 crc kubenswrapper[4770]: E0126 20:12:21.126553 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff\": container with ID starting with 5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff not found: ID does not exist" containerID="5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.126583 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff"} err="failed to get container status \"5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff\": rpc error: code = NotFound desc = could not find container \"5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff\": container with ID starting with 5371ae58458ac86ccba7d7878a004e72e0c77e26c1200348cad9fa36e222a7ff not found: ID does not exist" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.126600 4770 scope.go:117] "RemoveContainer" containerID="3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76" Jan 26 20:12:21 crc kubenswrapper[4770]: E0126 
20:12:21.126891 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76\": container with ID starting with 3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76 not found: ID does not exist" containerID="3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.127008 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76"} err="failed to get container status \"3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76\": rpc error: code = NotFound desc = could not find container \"3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76\": container with ID starting with 3d587fe5ff0aba545d42bba358725b607d0874d6eb0ec98c8f6e889815387d76 not found: ID does not exist" Jan 26 20:12:21 crc kubenswrapper[4770]: I0126 20:12:21.782309 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" path="/var/lib/kubelet/pods/3534ad86-60bf-48cd-95f7-840006ba1620/volumes" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.439249 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dnmqk"] Jan 26 20:12:28 crc kubenswrapper[4770]: E0126 20:12:28.448836 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="extract-utilities" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.448873 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="extract-utilities" Jan 26 20:12:28 crc kubenswrapper[4770]: E0126 20:12:28.448898 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="extract-content" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.448906 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="extract-content" Jan 26 20:12:28 crc kubenswrapper[4770]: E0126 20:12:28.448928 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="registry-server" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.448947 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="registry-server" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.449340 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3534ad86-60bf-48cd-95f7-840006ba1620" containerName="registry-server" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.450767 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.465977 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnmqk"] Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.537264 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-catalog-content\") pod \"community-operators-dnmqk\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.537320 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64slv\" (UniqueName: \"kubernetes.io/projected/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-kube-api-access-64slv\") pod \"community-operators-dnmqk\" (UID: 
\"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.537463 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-utilities\") pod \"community-operators-dnmqk\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.640121 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-utilities\") pod \"community-operators-dnmqk\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.640585 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-catalog-content\") pod \"community-operators-dnmqk\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.640726 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64slv\" (UniqueName: \"kubernetes.io/projected/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-kube-api-access-64slv\") pod \"community-operators-dnmqk\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.640935 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-utilities\") pod \"community-operators-dnmqk\" (UID: 
\"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.641046 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-catalog-content\") pod \"community-operators-dnmqk\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.684186 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64slv\" (UniqueName: \"kubernetes.io/projected/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-kube-api-access-64slv\") pod \"community-operators-dnmqk\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:28 crc kubenswrapper[4770]: I0126 20:12:28.786041 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:29 crc kubenswrapper[4770]: I0126 20:12:29.296061 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dnmqk"] Jan 26 20:12:30 crc kubenswrapper[4770]: I0126 20:12:30.124312 4770 generic.go:334] "Generic (PLEG): container finished" podID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerID="c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691" exitCode=0 Jan 26 20:12:30 crc kubenswrapper[4770]: I0126 20:12:30.124382 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnmqk" event={"ID":"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f","Type":"ContainerDied","Data":"c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691"} Jan 26 20:12:30 crc kubenswrapper[4770]: I0126 20:12:30.124547 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnmqk" event={"ID":"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f","Type":"ContainerStarted","Data":"d2135d2b44cc36299bd783b33721f07ac80b8458e8149c43e2378ee1967e0ce1"} Jan 26 20:12:31 crc kubenswrapper[4770]: I0126 20:12:31.134459 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnmqk" event={"ID":"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f","Type":"ContainerStarted","Data":"0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc"} Jan 26 20:12:32 crc kubenswrapper[4770]: I0126 20:12:32.148144 4770 generic.go:334] "Generic (PLEG): container finished" podID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerID="0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc" exitCode=0 Jan 26 20:12:32 crc kubenswrapper[4770]: I0126 20:12:32.148229 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnmqk" 
event={"ID":"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f","Type":"ContainerDied","Data":"0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc"} Jan 26 20:12:33 crc kubenswrapper[4770]: I0126 20:12:33.159293 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnmqk" event={"ID":"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f","Type":"ContainerStarted","Data":"16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e"} Jan 26 20:12:33 crc kubenswrapper[4770]: I0126 20:12:33.193030 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dnmqk" podStartSLOduration=2.784242067 podStartE2EDuration="5.193009594s" podCreationTimestamp="2026-01-26 20:12:28 +0000 UTC" firstStartedPulling="2026-01-26 20:12:30.126414176 +0000 UTC m=+5434.691320928" lastFinishedPulling="2026-01-26 20:12:32.535181693 +0000 UTC m=+5437.100088455" observedRunningTime="2026-01-26 20:12:33.181813253 +0000 UTC m=+5437.746719995" watchObservedRunningTime="2026-01-26 20:12:33.193009594 +0000 UTC m=+5437.757916346" Jan 26 20:12:38 crc kubenswrapper[4770]: I0126 20:12:38.786093 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:38 crc kubenswrapper[4770]: I0126 20:12:38.787237 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:38 crc kubenswrapper[4770]: I0126 20:12:38.877274 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:39 crc kubenswrapper[4770]: I0126 20:12:39.307350 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:39 crc kubenswrapper[4770]: I0126 20:12:39.393537 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-dnmqk"] Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.244274 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dnmqk" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerName="registry-server" containerID="cri-o://16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e" gracePeriod=2 Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.822901 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.956303 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-utilities\") pod \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.956389 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64slv\" (UniqueName: \"kubernetes.io/projected/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-kube-api-access-64slv\") pod \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.956445 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-catalog-content\") pod \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\" (UID: \"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f\") " Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.957033 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-utilities" (OuterVolumeSpecName: "utilities") pod "713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" (UID: 
"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.959108 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:41 crc kubenswrapper[4770]: I0126 20:12:41.963707 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-kube-api-access-64slv" (OuterVolumeSpecName: "kube-api-access-64slv") pod "713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" (UID: "713cdbc0-41c4-4045-a6ee-58f3a5f3e92f"). InnerVolumeSpecName "kube-api-access-64slv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.032579 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" (UID: "713cdbc0-41c4-4045-a6ee-58f3a5f3e92f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.061519 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64slv\" (UniqueName: \"kubernetes.io/projected/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-kube-api-access-64slv\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.061571 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.260367 4770 generic.go:334] "Generic (PLEG): container finished" podID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerID="16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e" exitCode=0 Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.260410 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnmqk" event={"ID":"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f","Type":"ContainerDied","Data":"16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e"} Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.260435 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dnmqk" event={"ID":"713cdbc0-41c4-4045-a6ee-58f3a5f3e92f","Type":"ContainerDied","Data":"d2135d2b44cc36299bd783b33721f07ac80b8458e8149c43e2378ee1967e0ce1"} Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.260451 4770 scope.go:117] "RemoveContainer" containerID="16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.260491 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dnmqk" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.292581 4770 scope.go:117] "RemoveContainer" containerID="0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.331376 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dnmqk"] Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.344738 4770 scope.go:117] "RemoveContainer" containerID="c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.348509 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dnmqk"] Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.404851 4770 scope.go:117] "RemoveContainer" containerID="16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e" Jan 26 20:12:42 crc kubenswrapper[4770]: E0126 20:12:42.405394 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e\": container with ID starting with 16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e not found: ID does not exist" containerID="16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.405450 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e"} err="failed to get container status \"16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e\": rpc error: code = NotFound desc = could not find container \"16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e\": container with ID starting with 16dc301615e798ff3edb8076005a2697f51d415080548162e23b4f69856e618e not 
found: ID does not exist" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.405484 4770 scope.go:117] "RemoveContainer" containerID="0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc" Jan 26 20:12:42 crc kubenswrapper[4770]: E0126 20:12:42.405988 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc\": container with ID starting with 0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc not found: ID does not exist" containerID="0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.406027 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc"} err="failed to get container status \"0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc\": rpc error: code = NotFound desc = could not find container \"0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc\": container with ID starting with 0fadd1959962ba91935573b8377f2fb2c669bf39caffbb29d3ae3bd4f869b0cc not found: ID does not exist" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.406052 4770 scope.go:117] "RemoveContainer" containerID="c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691" Jan 26 20:12:42 crc kubenswrapper[4770]: E0126 20:12:42.406604 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691\": container with ID starting with c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691 not found: ID does not exist" containerID="c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691" Jan 26 20:12:42 crc kubenswrapper[4770]: I0126 20:12:42.406664 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691"} err="failed to get container status \"c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691\": rpc error: code = NotFound desc = could not find container \"c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691\": container with ID starting with c85deb24100587d88ce8ec0f35fd79e1623406c5fa3b3fc5d552bac05eb54691 not found: ID does not exist" Jan 26 20:12:43 crc kubenswrapper[4770]: I0126 20:12:43.786378 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" path="/var/lib/kubelet/pods/713cdbc0-41c4-4045-a6ee-58f3a5f3e92f/volumes" Jan 26 20:13:00 crc kubenswrapper[4770]: I0126 20:13:00.330625 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:13:00 crc kubenswrapper[4770]: I0126 20:13:00.331365 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:13:30 crc kubenswrapper[4770]: I0126 20:13:30.330278 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:13:30 crc kubenswrapper[4770]: I0126 20:13:30.331091 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:14:00 crc kubenswrapper[4770]: I0126 20:14:00.330626 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:14:00 crc kubenswrapper[4770]: I0126 20:14:00.331398 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:14:00 crc kubenswrapper[4770]: I0126 20:14:00.331476 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 20:14:00 crc kubenswrapper[4770]: I0126 20:14:00.332857 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"941c2a6b7796e3b634da0bf18702ef70b7774b46ea7e444cdf5b6dba7973df46"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:14:00 crc kubenswrapper[4770]: I0126 20:14:00.332973 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" 
containerID="cri-o://941c2a6b7796e3b634da0bf18702ef70b7774b46ea7e444cdf5b6dba7973df46" gracePeriod=600 Jan 26 20:14:01 crc kubenswrapper[4770]: I0126 20:14:01.199181 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="941c2a6b7796e3b634da0bf18702ef70b7774b46ea7e444cdf5b6dba7973df46" exitCode=0 Jan 26 20:14:01 crc kubenswrapper[4770]: I0126 20:14:01.199264 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"941c2a6b7796e3b634da0bf18702ef70b7774b46ea7e444cdf5b6dba7973df46"} Jan 26 20:14:01 crc kubenswrapper[4770]: I0126 20:14:01.199653 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00"} Jan 26 20:14:01 crc kubenswrapper[4770]: I0126 20:14:01.199675 4770 scope.go:117] "RemoveContainer" containerID="9ed678dc1bb59aad768c6f11b680c4d3fabec88e1d1b6fd978a41e77ee5cb37c" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.170443 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq"] Jan 26 20:15:00 crc kubenswrapper[4770]: E0126 20:15:00.171532 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerName="extract-utilities" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.171553 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerName="extract-utilities" Jan 26 20:15:00 crc kubenswrapper[4770]: E0126 20:15:00.171599 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" 
containerName="extract-content" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.171611 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerName="extract-content" Jan 26 20:15:00 crc kubenswrapper[4770]: E0126 20:15:00.171636 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerName="registry-server" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.171648 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerName="registry-server" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.171904 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="713cdbc0-41c4-4045-a6ee-58f3a5f3e92f" containerName="registry-server" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.172775 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.196728 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.196819 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.200756 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq"] Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.310744 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpx7x\" (UniqueName: \"kubernetes.io/projected/63e34718-d364-47b7-a254-70f5d1dd44c6-kube-api-access-xpx7x\") pod \"collect-profiles-29490975-vvggq\" (UID: 
\"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.310846 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63e34718-d364-47b7-a254-70f5d1dd44c6-config-volume\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.310873 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63e34718-d364-47b7-a254-70f5d1dd44c6-secret-volume\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.413664 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpx7x\" (UniqueName: \"kubernetes.io/projected/63e34718-d364-47b7-a254-70f5d1dd44c6-kube-api-access-xpx7x\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.413936 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63e34718-d364-47b7-a254-70f5d1dd44c6-config-volume\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.413970 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/63e34718-d364-47b7-a254-70f5d1dd44c6-secret-volume\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.414892 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63e34718-d364-47b7-a254-70f5d1dd44c6-config-volume\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.432600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpx7x\" (UniqueName: \"kubernetes.io/projected/63e34718-d364-47b7-a254-70f5d1dd44c6-kube-api-access-xpx7x\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.485220 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63e34718-d364-47b7-a254-70f5d1dd44c6-secret-volume\") pod \"collect-profiles-29490975-vvggq\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.492295 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:00 crc kubenswrapper[4770]: I0126 20:15:00.959373 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq"] Jan 26 20:15:01 crc kubenswrapper[4770]: I0126 20:15:01.878888 4770 generic.go:334] "Generic (PLEG): container finished" podID="63e34718-d364-47b7-a254-70f5d1dd44c6" containerID="e2c3f60fbfa9d86dbefea34411e2f4eb723a76c53358fa8c0f26f55d2c042e68" exitCode=0 Jan 26 20:15:01 crc kubenswrapper[4770]: I0126 20:15:01.878937 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" event={"ID":"63e34718-d364-47b7-a254-70f5d1dd44c6","Type":"ContainerDied","Data":"e2c3f60fbfa9d86dbefea34411e2f4eb723a76c53358fa8c0f26f55d2c042e68"} Jan 26 20:15:01 crc kubenswrapper[4770]: I0126 20:15:01.879519 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" event={"ID":"63e34718-d364-47b7-a254-70f5d1dd44c6","Type":"ContainerStarted","Data":"4c843b78256273688f72efc0b88b52fb82f620b601408b61e78054e3fca9f781"} Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.354070 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.483960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63e34718-d364-47b7-a254-70f5d1dd44c6-secret-volume\") pod \"63e34718-d364-47b7-a254-70f5d1dd44c6\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.484057 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63e34718-d364-47b7-a254-70f5d1dd44c6-config-volume\") pod \"63e34718-d364-47b7-a254-70f5d1dd44c6\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.484934 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e34718-d364-47b7-a254-70f5d1dd44c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "63e34718-d364-47b7-a254-70f5d1dd44c6" (UID: "63e34718-d364-47b7-a254-70f5d1dd44c6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.485096 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpx7x\" (UniqueName: \"kubernetes.io/projected/63e34718-d364-47b7-a254-70f5d1dd44c6-kube-api-access-xpx7x\") pod \"63e34718-d364-47b7-a254-70f5d1dd44c6\" (UID: \"63e34718-d364-47b7-a254-70f5d1dd44c6\") " Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.486124 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63e34718-d364-47b7-a254-70f5d1dd44c6-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.492659 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e34718-d364-47b7-a254-70f5d1dd44c6-kube-api-access-xpx7x" (OuterVolumeSpecName: "kube-api-access-xpx7x") pod "63e34718-d364-47b7-a254-70f5d1dd44c6" (UID: "63e34718-d364-47b7-a254-70f5d1dd44c6"). InnerVolumeSpecName "kube-api-access-xpx7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.493595 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e34718-d364-47b7-a254-70f5d1dd44c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63e34718-d364-47b7-a254-70f5d1dd44c6" (UID: "63e34718-d364-47b7-a254-70f5d1dd44c6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.588654 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpx7x\" (UniqueName: \"kubernetes.io/projected/63e34718-d364-47b7-a254-70f5d1dd44c6-kube-api-access-xpx7x\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.589279 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63e34718-d364-47b7-a254-70f5d1dd44c6-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.908778 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" event={"ID":"63e34718-d364-47b7-a254-70f5d1dd44c6","Type":"ContainerDied","Data":"4c843b78256273688f72efc0b88b52fb82f620b601408b61e78054e3fca9f781"} Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.908836 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c843b78256273688f72efc0b88b52fb82f620b601408b61e78054e3fca9f781" Jan 26 20:15:03 crc kubenswrapper[4770]: I0126 20:15:03.908900 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490975-vvggq" Jan 26 20:15:04 crc kubenswrapper[4770]: I0126 20:15:04.447609 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82"] Jan 26 20:15:04 crc kubenswrapper[4770]: I0126 20:15:04.457879 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490930-b2m82"] Jan 26 20:15:05 crc kubenswrapper[4770]: I0126 20:15:05.789408 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d079b2-ed45-425a-8682-50d0b1d00711" path="/var/lib/kubelet/pods/15d079b2-ed45-425a-8682-50d0b1d00711/volumes" Jan 26 20:15:22 crc kubenswrapper[4770]: I0126 20:15:22.320324 4770 scope.go:117] "RemoveContainer" containerID="823c974bf07b9f0e7a657b2d221c428851373e4694b7e5e111dddf143b0183f9" Jan 26 20:15:59 crc kubenswrapper[4770]: I0126 20:15:59.563000 4770 generic.go:334] "Generic (PLEG): container finished" podID="b864a6fc-56ae-4c06-ad45-4ca55e1afd91" containerID="b46a2231cd68fe4e5cea96b1d49ebff0e75e85d23b4e74ff5b1e476ddd8d377d" exitCode=0 Jan 26 20:15:59 crc kubenswrapper[4770]: I0126 20:15:59.563090 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b864a6fc-56ae-4c06-ad45-4ca55e1afd91","Type":"ContainerDied","Data":"b46a2231cd68fe4e5cea96b1d49ebff0e75e85d23b4e74ff5b1e476ddd8d377d"} Jan 26 20:16:00 crc kubenswrapper[4770]: I0126 20:16:00.330370 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:16:00 crc kubenswrapper[4770]: I0126 20:16:00.330830 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.068397 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203145 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-config-data\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203235 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203271 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2wq4\" (UniqueName: \"kubernetes.io/projected/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-kube-api-access-l2wq4\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203325 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203467 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config-secret\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203505 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ca-certs\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203538 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ssh-key\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203571 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-workdir\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.203632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-temporary\") pod \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\" (UID: \"b864a6fc-56ae-4c06-ad45-4ca55e1afd91\") " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.204571 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-config-data" (OuterVolumeSpecName: "config-data") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: 
"b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.204794 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.210532 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-kube-api-access-l2wq4" (OuterVolumeSpecName: "kube-api-access-l2wq4") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "kube-api-access-l2wq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.224707 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.230399 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.235783 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.242608 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.245884 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.275603 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b864a6fc-56ae-4c06-ad45-4ca55e1afd91" (UID: "b864a6fc-56ae-4c06-ad45-4ca55e1afd91"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.305843 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.306077 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2wq4\" (UniqueName: \"kubernetes.io/projected/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-kube-api-access-l2wq4\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.306152 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.306215 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.306275 4770 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.306328 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.306387 4770 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 
20:16:01.306450 4770 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.306513 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b864a6fc-56ae-4c06-ad45-4ca55e1afd91-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.325371 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.408393 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.589648 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b864a6fc-56ae-4c06-ad45-4ca55e1afd91","Type":"ContainerDied","Data":"2c0300b29e0ccfd6d30135d3cc6ca7a1311634e6d7918e7eaa26be971534e0ae"} Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.589691 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0300b29e0ccfd6d30135d3cc6ca7a1311634e6d7918e7eaa26be971534e0ae" Jan 26 20:16:01 crc kubenswrapper[4770]: I0126 20:16:01.589782 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.972323 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 20:16:10 crc kubenswrapper[4770]: E0126 20:16:10.973425 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e34718-d364-47b7-a254-70f5d1dd44c6" containerName="collect-profiles" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.973442 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e34718-d364-47b7-a254-70f5d1dd44c6" containerName="collect-profiles" Jan 26 20:16:10 crc kubenswrapper[4770]: E0126 20:16:10.973478 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b864a6fc-56ae-4c06-ad45-4ca55e1afd91" containerName="tempest-tests-tempest-tests-runner" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.973485 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b864a6fc-56ae-4c06-ad45-4ca55e1afd91" containerName="tempest-tests-tempest-tests-runner" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.973743 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e34718-d364-47b7-a254-70f5d1dd44c6" containerName="collect-profiles" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.973767 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b864a6fc-56ae-4c06-ad45-4ca55e1afd91" containerName="tempest-tests-tempest-tests-runner" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.974654 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.979192 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hkh56" Jan 26 20:16:10 crc kubenswrapper[4770]: I0126 20:16:10.985947 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.118432 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dq87\" (UniqueName: \"kubernetes.io/projected/6d5f3552-6711-4496-a6c3-b15ee1664349-kube-api-access-5dq87\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6d5f3552-6711-4496-a6c3-b15ee1664349\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.118846 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6d5f3552-6711-4496-a6c3-b15ee1664349\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.220958 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6d5f3552-6711-4496-a6c3-b15ee1664349\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.221458 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dq87\" (UniqueName: 
\"kubernetes.io/projected/6d5f3552-6711-4496-a6c3-b15ee1664349-kube-api-access-5dq87\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6d5f3552-6711-4496-a6c3-b15ee1664349\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.221784 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6d5f3552-6711-4496-a6c3-b15ee1664349\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.245192 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dq87\" (UniqueName: \"kubernetes.io/projected/6d5f3552-6711-4496-a6c3-b15ee1664349-kube-api-access-5dq87\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6d5f3552-6711-4496-a6c3-b15ee1664349\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.247575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6d5f3552-6711-4496-a6c3-b15ee1664349\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.306743 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.783335 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 20:16:11 crc kubenswrapper[4770]: I0126 20:16:11.785344 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:16:12 crc kubenswrapper[4770]: I0126 20:16:12.741060 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6d5f3552-6711-4496-a6c3-b15ee1664349","Type":"ContainerStarted","Data":"ea881f72f528f514880d87b4038d485c23d88b7b1f7e23b4da3ee58aa92a2d10"} Jan 26 20:16:13 crc kubenswrapper[4770]: I0126 20:16:13.753360 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6d5f3552-6711-4496-a6c3-b15ee1664349","Type":"ContainerStarted","Data":"a2dba86763d074ee4f49cb251d3ddb49a08a1b5147ffddf731c418473e96fb7b"} Jan 26 20:16:13 crc kubenswrapper[4770]: I0126 20:16:13.775073 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.878529574 podStartE2EDuration="3.775053642s" podCreationTimestamp="2026-01-26 20:16:10 +0000 UTC" firstStartedPulling="2026-01-26 20:16:11.785085853 +0000 UTC m=+5656.349992595" lastFinishedPulling="2026-01-26 20:16:12.681609931 +0000 UTC m=+5657.246516663" observedRunningTime="2026-01-26 20:16:13.769437942 +0000 UTC m=+5658.334344704" watchObservedRunningTime="2026-01-26 20:16:13.775053642 +0000 UTC m=+5658.339960394" Jan 26 20:16:30 crc kubenswrapper[4770]: I0126 20:16:30.330687 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:16:30 crc kubenswrapper[4770]: I0126 20:16:30.331623 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.598175 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztzq5/must-gather-fbv7n"] Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.600341 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.602457 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ztzq5"/"openshift-service-ca.crt" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.603150 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ztzq5"/"kube-root-ca.crt" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.609072 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ztzq5"/"default-dockercfg-4nz8q" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.609080 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ztzq5/must-gather-fbv7n"] Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.710211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c31a052-de8a-4db1-8b0c-308790e7f533-must-gather-output\") pod \"must-gather-fbv7n\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " 
pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.710293 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkf2\" (UniqueName: \"kubernetes.io/projected/0c31a052-de8a-4db1-8b0c-308790e7f533-kube-api-access-xhkf2\") pod \"must-gather-fbv7n\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.811722 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c31a052-de8a-4db1-8b0c-308790e7f533-must-gather-output\") pod \"must-gather-fbv7n\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.811821 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkf2\" (UniqueName: \"kubernetes.io/projected/0c31a052-de8a-4db1-8b0c-308790e7f533-kube-api-access-xhkf2\") pod \"must-gather-fbv7n\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.812601 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c31a052-de8a-4db1-8b0c-308790e7f533-must-gather-output\") pod \"must-gather-fbv7n\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.839926 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkf2\" (UniqueName: \"kubernetes.io/projected/0c31a052-de8a-4db1-8b0c-308790e7f533-kube-api-access-xhkf2\") pod \"must-gather-fbv7n\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " 
pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:38 crc kubenswrapper[4770]: I0126 20:16:38.919377 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:16:39 crc kubenswrapper[4770]: I0126 20:16:39.411800 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ztzq5/must-gather-fbv7n"] Jan 26 20:16:39 crc kubenswrapper[4770]: W0126 20:16:39.420443 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c31a052_de8a_4db1_8b0c_308790e7f533.slice/crio-ea1e8c044f356de18b02556135ae3004338e5e537b6852488d32a783d9901eeb WatchSource:0}: Error finding container ea1e8c044f356de18b02556135ae3004338e5e537b6852488d32a783d9901eeb: Status 404 returned error can't find the container with id ea1e8c044f356de18b02556135ae3004338e5e537b6852488d32a783d9901eeb Jan 26 20:16:40 crc kubenswrapper[4770]: I0126 20:16:40.113913 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" event={"ID":"0c31a052-de8a-4db1-8b0c-308790e7f533","Type":"ContainerStarted","Data":"ea1e8c044f356de18b02556135ae3004338e5e537b6852488d32a783d9901eeb"} Jan 26 20:16:49 crc kubenswrapper[4770]: I0126 20:16:49.233035 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" event={"ID":"0c31a052-de8a-4db1-8b0c-308790e7f533","Type":"ContainerStarted","Data":"a79000ebfb695e74019ad15e5f8590c3883404116544af21410a74e9a948d750"} Jan 26 20:16:49 crc kubenswrapper[4770]: I0126 20:16:49.233654 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" event={"ID":"0c31a052-de8a-4db1-8b0c-308790e7f533","Type":"ContainerStarted","Data":"b38ca10a1d1d7a5e409f483320d336ff7d167c8adceb4be48d08cab67f6686d3"} Jan 26 20:16:49 crc kubenswrapper[4770]: I0126 20:16:49.256630 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" podStartSLOduration=2.383921899 podStartE2EDuration="11.256615466s" podCreationTimestamp="2026-01-26 20:16:38 +0000 UTC" firstStartedPulling="2026-01-26 20:16:39.422222782 +0000 UTC m=+5683.987129534" lastFinishedPulling="2026-01-26 20:16:48.294916329 +0000 UTC m=+5692.859823101" observedRunningTime="2026-01-26 20:16:49.254944021 +0000 UTC m=+5693.819850763" watchObservedRunningTime="2026-01-26 20:16:49.256615466 +0000 UTC m=+5693.821522198" Jan 26 20:16:51 crc kubenswrapper[4770]: E0126 20:16:51.918174 4770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:44984->38.102.83.51:41531: write tcp 38.102.83.51:44984->38.102.83.51:41531: write: broken pipe Jan 26 20:16:52 crc kubenswrapper[4770]: E0126 20:16:52.181463 4770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.51:45048->38.102.83.51:41531: write tcp 38.102.83.51:45048->38.102.83.51:41531: write: broken pipe Jan 26 20:16:52 crc kubenswrapper[4770]: I0126 20:16:52.934106 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-s7r44"] Jan 26 20:16:52 crc kubenswrapper[4770]: I0126 20:16:52.935505 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc kubenswrapper[4770]: I0126 20:16:53.036982 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbp7w\" (UniqueName: \"kubernetes.io/projected/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-kube-api-access-lbp7w\") pod \"crc-debug-s7r44\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc kubenswrapper[4770]: I0126 20:16:53.037207 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-host\") pod \"crc-debug-s7r44\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc kubenswrapper[4770]: I0126 20:16:53.139135 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-host\") pod \"crc-debug-s7r44\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc kubenswrapper[4770]: I0126 20:16:53.139365 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbp7w\" (UniqueName: \"kubernetes.io/projected/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-kube-api-access-lbp7w\") pod \"crc-debug-s7r44\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc kubenswrapper[4770]: I0126 20:16:53.139360 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-host\") pod \"crc-debug-s7r44\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc 
kubenswrapper[4770]: I0126 20:16:53.167933 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbp7w\" (UniqueName: \"kubernetes.io/projected/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-kube-api-access-lbp7w\") pod \"crc-debug-s7r44\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc kubenswrapper[4770]: I0126 20:16:53.250980 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:16:53 crc kubenswrapper[4770]: W0126 20:16:53.283872 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod090fb1b0_6773_4d9b_9fca_fbc09b0203ff.slice/crio-05cfeebab06b340b9ab0c5a1429c6339765d888ca4fb0532eeef34103ceafa4a WatchSource:0}: Error finding container 05cfeebab06b340b9ab0c5a1429c6339765d888ca4fb0532eeef34103ceafa4a: Status 404 returned error can't find the container with id 05cfeebab06b340b9ab0c5a1429c6339765d888ca4fb0532eeef34103ceafa4a Jan 26 20:16:54 crc kubenswrapper[4770]: I0126 20:16:54.287518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" event={"ID":"090fb1b0-6773-4d9b-9fca-fbc09b0203ff","Type":"ContainerStarted","Data":"05cfeebab06b340b9ab0c5a1429c6339765d888ca4fb0532eeef34103ceafa4a"} Jan 26 20:17:00 crc kubenswrapper[4770]: I0126 20:17:00.330836 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:17:00 crc kubenswrapper[4770]: I0126 20:17:00.331331 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" 
podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:17:00 crc kubenswrapper[4770]: I0126 20:17:00.331376 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 20:17:00 crc kubenswrapper[4770]: I0126 20:17:00.332166 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:17:00 crc kubenswrapper[4770]: I0126 20:17:00.332218 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" gracePeriod=600 Jan 26 20:17:01 crc kubenswrapper[4770]: I0126 20:17:01.359149 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" exitCode=0 Jan 26 20:17:01 crc kubenswrapper[4770]: I0126 20:17:01.359297 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00"} Jan 26 20:17:01 crc kubenswrapper[4770]: I0126 20:17:01.359442 4770 scope.go:117] "RemoveContainer" 
containerID="941c2a6b7796e3b634da0bf18702ef70b7774b46ea7e444cdf5b6dba7973df46" Jan 26 20:17:05 crc kubenswrapper[4770]: E0126 20:17:05.947852 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:17:06 crc kubenswrapper[4770]: I0126 20:17:06.404374 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:17:06 crc kubenswrapper[4770]: E0126 20:17:06.405150 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:17:06 crc kubenswrapper[4770]: I0126 20:17:06.408068 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" event={"ID":"090fb1b0-6773-4d9b-9fca-fbc09b0203ff","Type":"ContainerStarted","Data":"1195d1d7b19f1fcb225881a1f2c8cedbd1c3684815710afbd27da06e49a39be2"} Jan 26 20:17:06 crc kubenswrapper[4770]: I0126 20:17:06.453367 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" podStartSLOduration=1.668138603 podStartE2EDuration="14.453349318s" podCreationTimestamp="2026-01-26 20:16:52 +0000 UTC" firstStartedPulling="2026-01-26 20:16:53.286200474 +0000 UTC m=+5697.851107206" lastFinishedPulling="2026-01-26 
20:17:06.071411189 +0000 UTC m=+5710.636317921" observedRunningTime="2026-01-26 20:17:06.444970493 +0000 UTC m=+5711.009877225" watchObservedRunningTime="2026-01-26 20:17:06.453349318 +0000 UTC m=+5711.018256050" Jan 26 20:17:19 crc kubenswrapper[4770]: I0126 20:17:19.767364 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:17:19 crc kubenswrapper[4770]: E0126 20:17:19.768042 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:17:31 crc kubenswrapper[4770]: I0126 20:17:31.767802 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:17:31 crc kubenswrapper[4770]: E0126 20:17:31.768536 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:17:44 crc kubenswrapper[4770]: I0126 20:17:44.767733 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:17:44 crc kubenswrapper[4770]: E0126 20:17:44.768433 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:17:52 crc kubenswrapper[4770]: I0126 20:17:52.866574 4770 generic.go:334] "Generic (PLEG): container finished" podID="090fb1b0-6773-4d9b-9fca-fbc09b0203ff" containerID="1195d1d7b19f1fcb225881a1f2c8cedbd1c3684815710afbd27da06e49a39be2" exitCode=0 Jan 26 20:17:52 crc kubenswrapper[4770]: I0126 20:17:52.866744 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" event={"ID":"090fb1b0-6773-4d9b-9fca-fbc09b0203ff","Type":"ContainerDied","Data":"1195d1d7b19f1fcb225881a1f2c8cedbd1c3684815710afbd27da06e49a39be2"} Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.004668 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.048511 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-s7r44"] Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.060103 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-s7r44"] Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.130686 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-host" (OuterVolumeSpecName: "host") pod "090fb1b0-6773-4d9b-9fca-fbc09b0203ff" (UID: "090fb1b0-6773-4d9b-9fca-fbc09b0203ff"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.130583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-host\") pod \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.131135 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbp7w\" (UniqueName: \"kubernetes.io/projected/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-kube-api-access-lbp7w\") pod \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\" (UID: \"090fb1b0-6773-4d9b-9fca-fbc09b0203ff\") " Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.133112 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.136997 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-kube-api-access-lbp7w" (OuterVolumeSpecName: "kube-api-access-lbp7w") pod "090fb1b0-6773-4d9b-9fca-fbc09b0203ff" (UID: "090fb1b0-6773-4d9b-9fca-fbc09b0203ff"). InnerVolumeSpecName "kube-api-access-lbp7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.235002 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbp7w\" (UniqueName: \"kubernetes.io/projected/090fb1b0-6773-4d9b-9fca-fbc09b0203ff-kube-api-access-lbp7w\") on node \"crc\" DevicePath \"\"" Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.884560 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05cfeebab06b340b9ab0c5a1429c6339765d888ca4fb0532eeef34103ceafa4a" Jan 26 20:17:54 crc kubenswrapper[4770]: I0126 20:17:54.884635 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-s7r44" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.260441 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-jg9vm"] Jan 26 20:17:55 crc kubenswrapper[4770]: E0126 20:17:55.260957 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090fb1b0-6773-4d9b-9fca-fbc09b0203ff" containerName="container-00" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.260975 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="090fb1b0-6773-4d9b-9fca-fbc09b0203ff" containerName="container-00" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.261333 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="090fb1b0-6773-4d9b-9fca-fbc09b0203ff" containerName="container-00" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.262259 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.357152 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5d765f5-9fc5-4901-801e-626b9de6c1bd-host\") pod \"crc-debug-jg9vm\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.357577 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5bb2\" (UniqueName: \"kubernetes.io/projected/e5d765f5-9fc5-4901-801e-626b9de6c1bd-kube-api-access-w5bb2\") pod \"crc-debug-jg9vm\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.459635 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5d765f5-9fc5-4901-801e-626b9de6c1bd-host\") pod \"crc-debug-jg9vm\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.459776 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5bb2\" (UniqueName: \"kubernetes.io/projected/e5d765f5-9fc5-4901-801e-626b9de6c1bd-kube-api-access-w5bb2\") pod \"crc-debug-jg9vm\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.460040 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5d765f5-9fc5-4901-801e-626b9de6c1bd-host\") pod \"crc-debug-jg9vm\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc 
kubenswrapper[4770]: I0126 20:17:55.482624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5bb2\" (UniqueName: \"kubernetes.io/projected/e5d765f5-9fc5-4901-801e-626b9de6c1bd-kube-api-access-w5bb2\") pod \"crc-debug-jg9vm\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.590169 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.795469 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:17:55 crc kubenswrapper[4770]: E0126 20:17:55.796733 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.803192 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090fb1b0-6773-4d9b-9fca-fbc09b0203ff" path="/var/lib/kubelet/pods/090fb1b0-6773-4d9b-9fca-fbc09b0203ff/volumes" Jan 26 20:17:55 crc kubenswrapper[4770]: I0126 20:17:55.894238 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" event={"ID":"e5d765f5-9fc5-4901-801e-626b9de6c1bd","Type":"ContainerStarted","Data":"b73802c06fe9c092fb8a84d2776764135b3452739520a4a44b9ae406b1adfa0d"} Jan 26 20:17:56 crc kubenswrapper[4770]: I0126 20:17:56.907403 4770 generic.go:334] "Generic (PLEG): container finished" podID="e5d765f5-9fc5-4901-801e-626b9de6c1bd" 
containerID="e5af79f7bf30486bad4f3964c997fb035b93948d83067c69f654ff50bfdc2311" exitCode=0 Jan 26 20:17:56 crc kubenswrapper[4770]: I0126 20:17:56.907599 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" event={"ID":"e5d765f5-9fc5-4901-801e-626b9de6c1bd","Type":"ContainerDied","Data":"e5af79f7bf30486bad4f3964c997fb035b93948d83067c69f654ff50bfdc2311"} Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.477167 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.515552 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5bb2\" (UniqueName: \"kubernetes.io/projected/e5d765f5-9fc5-4901-801e-626b9de6c1bd-kube-api-access-w5bb2\") pod \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.515632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5d765f5-9fc5-4901-801e-626b9de6c1bd-host\") pod \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\" (UID: \"e5d765f5-9fc5-4901-801e-626b9de6c1bd\") " Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.516329 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5d765f5-9fc5-4901-801e-626b9de6c1bd-host" (OuterVolumeSpecName: "host") pod "e5d765f5-9fc5-4901-801e-626b9de6c1bd" (UID: "e5d765f5-9fc5-4901-801e-626b9de6c1bd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.539987 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d765f5-9fc5-4901-801e-626b9de6c1bd-kube-api-access-w5bb2" (OuterVolumeSpecName: "kube-api-access-w5bb2") pod "e5d765f5-9fc5-4901-801e-626b9de6c1bd" (UID: "e5d765f5-9fc5-4901-801e-626b9de6c1bd"). InnerVolumeSpecName "kube-api-access-w5bb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.617463 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5bb2\" (UniqueName: \"kubernetes.io/projected/e5d765f5-9fc5-4901-801e-626b9de6c1bd-kube-api-access-w5bb2\") on node \"crc\" DevicePath \"\"" Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.617488 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5d765f5-9fc5-4901-801e-626b9de6c1bd-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.944724 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-jg9vm"] Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.951583 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" event={"ID":"e5d765f5-9fc5-4901-801e-626b9de6c1bd","Type":"ContainerDied","Data":"b73802c06fe9c092fb8a84d2776764135b3452739520a4a44b9ae406b1adfa0d"} Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.951615 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73802c06fe9c092fb8a84d2776764135b3452739520a4a44b9ae406b1adfa0d" Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.951665 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-jg9vm" Jan 26 20:17:58 crc kubenswrapper[4770]: I0126 20:17:58.968850 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-jg9vm"] Jan 26 20:17:59 crc kubenswrapper[4770]: I0126 20:17:59.782527 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d765f5-9fc5-4901-801e-626b9de6c1bd" path="/var/lib/kubelet/pods/e5d765f5-9fc5-4901-801e-626b9de6c1bd/volumes" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.144654 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-q7rqj"] Jan 26 20:18:00 crc kubenswrapper[4770]: E0126 20:18:00.145486 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d765f5-9fc5-4901-801e-626b9de6c1bd" containerName="container-00" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.145503 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d765f5-9fc5-4901-801e-626b9de6c1bd" containerName="container-00" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.145819 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d765f5-9fc5-4901-801e-626b9de6c1bd" containerName="container-00" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.146729 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.249489 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30b6cb9d-ebff-47c7-8b58-f4921fad2221-host\") pod \"crc-debug-q7rqj\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.249998 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24krb\" (UniqueName: \"kubernetes.io/projected/30b6cb9d-ebff-47c7-8b58-f4921fad2221-kube-api-access-24krb\") pod \"crc-debug-q7rqj\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.352024 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30b6cb9d-ebff-47c7-8b58-f4921fad2221-host\") pod \"crc-debug-q7rqj\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.352245 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30b6cb9d-ebff-47c7-8b58-f4921fad2221-host\") pod \"crc-debug-q7rqj\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.352322 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24krb\" (UniqueName: \"kubernetes.io/projected/30b6cb9d-ebff-47c7-8b58-f4921fad2221-kube-api-access-24krb\") pod \"crc-debug-q7rqj\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc 
kubenswrapper[4770]: I0126 20:18:00.576508 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24krb\" (UniqueName: \"kubernetes.io/projected/30b6cb9d-ebff-47c7-8b58-f4921fad2221-kube-api-access-24krb\") pod \"crc-debug-q7rqj\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.776677 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:00 crc kubenswrapper[4770]: W0126 20:18:00.817143 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30b6cb9d_ebff_47c7_8b58_f4921fad2221.slice/crio-c69f5f21360e2fc14e9fc4549a1e20494c569f224c43da0d78363d1488410743 WatchSource:0}: Error finding container c69f5f21360e2fc14e9fc4549a1e20494c569f224c43da0d78363d1488410743: Status 404 returned error can't find the container with id c69f5f21360e2fc14e9fc4549a1e20494c569f224c43da0d78363d1488410743 Jan 26 20:18:00 crc kubenswrapper[4770]: I0126 20:18:00.977519 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" event={"ID":"30b6cb9d-ebff-47c7-8b58-f4921fad2221","Type":"ContainerStarted","Data":"c69f5f21360e2fc14e9fc4549a1e20494c569f224c43da0d78363d1488410743"} Jan 26 20:18:01 crc kubenswrapper[4770]: I0126 20:18:01.990315 4770 generic.go:334] "Generic (PLEG): container finished" podID="30b6cb9d-ebff-47c7-8b58-f4921fad2221" containerID="afb599d5d7979d7b7a972cc749e00897ddc53e7e20edf478656c73a90f47bc52" exitCode=0 Jan 26 20:18:01 crc kubenswrapper[4770]: I0126 20:18:01.990366 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" event={"ID":"30b6cb9d-ebff-47c7-8b58-f4921fad2221","Type":"ContainerDied","Data":"afb599d5d7979d7b7a972cc749e00897ddc53e7e20edf478656c73a90f47bc52"} Jan 26 
20:18:02 crc kubenswrapper[4770]: I0126 20:18:02.048248 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-q7rqj"] Jan 26 20:18:02 crc kubenswrapper[4770]: I0126 20:18:02.059219 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztzq5/crc-debug-q7rqj"] Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.161141 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.233466 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24krb\" (UniqueName: \"kubernetes.io/projected/30b6cb9d-ebff-47c7-8b58-f4921fad2221-kube-api-access-24krb\") pod \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.233538 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30b6cb9d-ebff-47c7-8b58-f4921fad2221-host\") pod \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\" (UID: \"30b6cb9d-ebff-47c7-8b58-f4921fad2221\") " Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.233671 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30b6cb9d-ebff-47c7-8b58-f4921fad2221-host" (OuterVolumeSpecName: "host") pod "30b6cb9d-ebff-47c7-8b58-f4921fad2221" (UID: "30b6cb9d-ebff-47c7-8b58-f4921fad2221"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.234143 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30b6cb9d-ebff-47c7-8b58-f4921fad2221-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.239326 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b6cb9d-ebff-47c7-8b58-f4921fad2221-kube-api-access-24krb" (OuterVolumeSpecName: "kube-api-access-24krb") pod "30b6cb9d-ebff-47c7-8b58-f4921fad2221" (UID: "30b6cb9d-ebff-47c7-8b58-f4921fad2221"). InnerVolumeSpecName "kube-api-access-24krb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.336486 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24krb\" (UniqueName: \"kubernetes.io/projected/30b6cb9d-ebff-47c7-8b58-f4921fad2221-kube-api-access-24krb\") on node \"crc\" DevicePath \"\"" Jan 26 20:18:03 crc kubenswrapper[4770]: I0126 20:18:03.778410 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b6cb9d-ebff-47c7-8b58-f4921fad2221" path="/var/lib/kubelet/pods/30b6cb9d-ebff-47c7-8b58-f4921fad2221/volumes" Jan 26 20:18:04 crc kubenswrapper[4770]: I0126 20:18:04.016545 4770 scope.go:117] "RemoveContainer" containerID="afb599d5d7979d7b7a972cc749e00897ddc53e7e20edf478656c73a90f47bc52" Jan 26 20:18:04 crc kubenswrapper[4770]: I0126 20:18:04.016598 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ztzq5/crc-debug-q7rqj" Jan 26 20:18:10 crc kubenswrapper[4770]: I0126 20:18:10.767942 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:18:10 crc kubenswrapper[4770]: E0126 20:18:10.769002 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:18:22 crc kubenswrapper[4770]: I0126 20:18:22.767373 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:18:22 crc kubenswrapper[4770]: E0126 20:18:22.768561 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:18:36 crc kubenswrapper[4770]: I0126 20:18:36.767524 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:18:36 crc kubenswrapper[4770]: E0126 20:18:36.770052 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:18:38 crc kubenswrapper[4770]: I0126 20:18:38.337237 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-545575dfd-bbtbf_46ff829b-eabe-4d50-a22f-4da3d6cf798f/barbican-api/0.log" Jan 26 20:18:38 crc kubenswrapper[4770]: I0126 20:18:38.536668 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-545575dfd-bbtbf_46ff829b-eabe-4d50-a22f-4da3d6cf798f/barbican-api-log/0.log" Jan 26 20:18:38 crc kubenswrapper[4770]: I0126 20:18:38.581955 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-647c797856-n9jkj_e3807ac3-64e8-4132-8b60-59d034d69c52/barbican-keystone-listener/0.log" Jan 26 20:18:38 crc kubenswrapper[4770]: I0126 20:18:38.673386 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-647c797856-n9jkj_e3807ac3-64e8-4132-8b60-59d034d69c52/barbican-keystone-listener-log/0.log" Jan 26 20:18:38 crc kubenswrapper[4770]: I0126 20:18:38.745944 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f49cc977f-jfnpn_5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd/barbican-worker/0.log" Jan 26 20:18:38 crc kubenswrapper[4770]: I0126 20:18:38.846502 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f49cc977f-jfnpn_5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd/barbican-worker-log/0.log" Jan 26 20:18:38 crc kubenswrapper[4770]: I0126 20:18:38.977417 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd_57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.149498 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/ceilometer-central-agent/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.172239 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/ceilometer-notification-agent/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.223212 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/proxy-httpd/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.242487 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/sg-core/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.421340 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cc30e4a5-148d-4296-b220-518e972b4f3b/cinder-api-log/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.753005 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0c995c7e-a30a-4482-98f4-1b88979f2702/probe/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.796097 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cc30e4a5-148d-4296-b220-518e972b4f3b/cinder-api/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.887328 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0c995c7e-a30a-4482-98f4-1b88979f2702/cinder-backup/0.log" Jan 26 20:18:39 crc kubenswrapper[4770]: I0126 20:18:39.984512 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bf3cbbc4-d990-4d7d-9514-28beda8c084e/cinder-scheduler/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.003644 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bf3cbbc4-d990-4d7d-9514-28beda8c084e/probe/0.log" Jan 
26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.197169 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_098c14a9-04f1-4bba-8770-cb3ba0add71e/probe/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.257186 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_098c14a9-04f1-4bba-8770-cb3ba0add71e/cinder-volume/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.429580 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c5b4494e-e5fd-4561-8b35-9993d10cbe6b/cinder-volume/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.472831 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c5b4494e-e5fd-4561-8b35-9993d10cbe6b/probe/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.524467 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d_f64f037e-f80f-4f8d-be06-9917ac988deb/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.667613 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl_f2cab92c-6548-4bab-82d8-f9cc534b88a8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.771020 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7cbc8554f7-j54ps_5bf0f8d0-3821-4f2d-98d5-eeb869043350/init/0.log" Jan 26 20:18:40 crc kubenswrapper[4770]: I0126 20:18:40.925023 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7cbc8554f7-j54ps_5bf0f8d0-3821-4f2d-98d5-eeb869043350/init/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.007145 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k_f9cfc064-c4a3-42cf-8193-9090da67b4db/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.097787 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7cbc8554f7-j54ps_5bf0f8d0-3821-4f2d-98d5-eeb869043350/dnsmasq-dns/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.210060 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c98b34c0-4fc9-4b79-b664-bbc8ddb787a1/glance-httpd/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.239454 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c98b34c0-4fc9-4b79-b664-bbc8ddb787a1/glance-log/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.303295 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_93320a1f-7ced-4765-95a5-918a8fa2de1c/glance-httpd/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.366974 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_93320a1f-7ced-4765-95a5-918a8fa2de1c/glance-log/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.516074 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77b47dc986-cqqn6_65b445e3-2f98-4b3d-9290-4e7eff894ef0/horizon/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.610972 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr_514407e1-deb8-4ac4-bf0e-9b93842cb8f9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.719536 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zd9mk_51afe695-3612-4c67-8f8f-d7cf1c927b20/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:41 crc kubenswrapper[4770]: I0126 20:18:41.853986 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490961-npqvs_f567e5e2-7857-417c-8258-63661d995e06/keystone-cron/0.log" Jan 26 20:18:42 crc kubenswrapper[4770]: I0126 20:18:42.078095 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6994181f-05b0-468c-911a-4f910e017419/kube-state-metrics/0.log" Jan 26 20:18:42 crc kubenswrapper[4770]: I0126 20:18:42.191030 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77b47dc986-cqqn6_65b445e3-2f98-4b3d-9290-4e7eff894ef0/horizon-log/0.log" Jan 26 20:18:42 crc kubenswrapper[4770]: I0126 20:18:42.359009 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp_372fe502-3240-4adc-b60d-ae93c8a37430/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:42 crc kubenswrapper[4770]: I0126 20:18:42.503547 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-dfccf5f44-hghd8_d119257f-62e4-4f5b-8c56-3bd82b5b6041/keystone-api/0.log" Jan 26 20:18:42 crc kubenswrapper[4770]: I0126 20:18:42.802557 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c5fff9c7-vsc8j_061a1ade-3e2c-4fa3-af1d-79119e42b777/neutron-httpd/0.log" Jan 26 20:18:42 crc kubenswrapper[4770]: I0126 20:18:42.807964 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg_5c761917-b83c-4c4b-8aff-79848506a7cd/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:42 crc kubenswrapper[4770]: I0126 20:18:42.892950 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5c5fff9c7-vsc8j_061a1ade-3e2c-4fa3-af1d-79119e42b777/neutron-api/0.log" Jan 26 20:18:43 crc kubenswrapper[4770]: I0126 20:18:43.482254 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7b7b00b0-e2fe-4012-8d42-ed69e1345f94/nova-cell0-conductor-conductor/0.log" Jan 26 20:18:43 crc kubenswrapper[4770]: I0126 20:18:43.721632 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6a5bd373-f3aa-42ca-8360-32e1de10c999/nova-cell1-conductor-conductor/0.log" Jan 26 20:18:44 crc kubenswrapper[4770]: I0126 20:18:44.074355 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fcf4af4b-a734-40c3-be45-ca0dd2a43124/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 20:18:44 crc kubenswrapper[4770]: I0126 20:18:44.214499 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-q9qgt_c54172aa-4886-49b2-8834-ea8e8c57306e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:44 crc kubenswrapper[4770]: I0126 20:18:44.364839 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8af7a04a-7f7c-4e64-ab2d-40bb252db6ae/nova-api-log/0.log" Jan 26 20:18:44 crc kubenswrapper[4770]: I0126 20:18:44.550748 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7781b437-1736-47ca-b461-7fc8359ef733/nova-metadata-log/0.log" Jan 26 20:18:44 crc kubenswrapper[4770]: I0126 20:18:44.781668 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8af7a04a-7f7c-4e64-ab2d-40bb252db6ae/nova-api-api/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.004576 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b/mysql-bootstrap/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.081842 
4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_76beecce-14cb-4546-9054-5b8bdd4293d9/nova-scheduler-scheduler/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.166349 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b/mysql-bootstrap/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.184620 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b/galera/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.414169 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e620ef2b-6951-4c91-8517-c35e07ee8a2a/mysql-bootstrap/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.540521 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e620ef2b-6951-4c91-8517-c35e07ee8a2a/galera/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.566951 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e620ef2b-6951-4c91-8517-c35e07ee8a2a/mysql-bootstrap/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.734935 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_db423aff-dffd-46a6-bd83-765c623ab77c/openstackclient/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.838755 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hgfvf_9d2095b9-c866-4424-aa95-31718bd65d61/ovn-controller/0.log" Jan 26 20:18:45 crc kubenswrapper[4770]: I0126 20:18:45.999456 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t59c4_d9ff64ab-79f6-4941-8de7-b9edbea8439d/openstack-network-exporter/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.196264 4770 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovsdb-server-init/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.358994 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovsdb-server-init/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.416138 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovsdb-server/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.658635 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hk2v6_483f1a9a-7983-4628-bc2e-ab37a776dcf6/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.744401 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovs-vswitchd/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.828976 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7781b437-1736-47ca-b461-7fc8359ef733/nova-metadata-metadata/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.876628 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_49994115-56ea-46a6-a7ae-bff2b9751bc8/openstack-network-exporter/0.log" Jan 26 20:18:46 crc kubenswrapper[4770]: I0126 20:18:46.995987 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_49994115-56ea-46a6-a7ae-bff2b9751bc8/ovn-northd/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.051193 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b42faa6-0359-44d0-96ea-7264ab250ba4/openstack-network-exporter/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.051459 4770 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b42faa6-0359-44d0-96ea-7264ab250ba4/ovsdbserver-nb/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.203380 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_23527c1a-fd08-4cc7-a6b7-48fe3988ac6e/openstack-network-exporter/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.266693 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_23527c1a-fd08-4cc7-a6b7-48fe3988ac6e/ovsdbserver-sb/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.651938 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/init-config-reloader/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.685475 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dfdbdd84d-x7fsz_a884b73b-0f60-4327-a836-b9c20f70b6e6/placement-api/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.718528 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dfdbdd84d-x7fsz_a884b73b-0f60-4327-a836-b9c20f70b6e6/placement-log/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.935766 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/prometheus/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.955429 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/init-config-reloader/0.log" Jan 26 20:18:47 crc kubenswrapper[4770]: I0126 20:18:47.963922 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/config-reloader/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.025468 4770 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/thanos-sidecar/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.115324 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_276b57ae-3637-49f3-a25c-9e8d7fc369ba/setup-container/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.414640 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_276b57ae-3637-49f3-a25c-9e8d7fc369ba/rabbitmq/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.423276 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7e3d608a-c9d7-4a29-b45a-0c175851fdbc/setup-container/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.457254 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_276b57ae-3637-49f3-a25c-9e8d7fc369ba/setup-container/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.687136 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_22b25319-9d84-42f2-b5ed-127c06f29bbb/setup-container/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.698184 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7e3d608a-c9d7-4a29-b45a-0c175851fdbc/rabbitmq/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.722368 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7e3d608a-c9d7-4a29-b45a-0c175851fdbc/setup-container/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.930651 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_22b25319-9d84-42f2-b5ed-127c06f29bbb/setup-container/0.log" Jan 26 20:18:48 crc kubenswrapper[4770]: I0126 20:18:48.946397 4770 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_22b25319-9d84-42f2-b5ed-127c06f29bbb/rabbitmq/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 20:18:49.049490 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j_4fdf356a-1a71-4b6f-92aa-c2c3a963f28e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 20:18:49.141130 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-grrpm_dbfc185f-efba-4b46-b49a-0045340ae3cc/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 20:18:49.268408 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k_4ae332ee-80e2-4c02-a235-a318900f5ab4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 20:18:49.428308 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6962s_01d59985-d42f-42a7-9af0-01420a06b702/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 20:18:49.537033 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vm5mv_5f5964e6-f0a0-459a-a754-dcefc5a6ee69/ssh-known-hosts-edpm-deployment/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 20:18:49.751994 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8688c56555-rsnrn_65d3af51-41f4-40e5-949e-a3eb611043bb/proxy-server/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 20:18:49.842781 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vx59z_ceb06b58-7f92-4704-909b-3c591476f04c/swift-ring-rebalance/0.log" Jan 26 20:18:49 crc kubenswrapper[4770]: I0126 
20:18:49.912160 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8688c56555-rsnrn_65d3af51-41f4-40e5-949e-a3eb611043bb/proxy-httpd/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.021597 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-auditor/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.140370 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-reaper/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.158472 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-replicator/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.202381 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-server/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.260756 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-auditor/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.355922 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-server/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.378717 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-replicator/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.440970 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-updater/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.564790 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-expirer/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.611179 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-replicator/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.627159 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-auditor/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.627503 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-server/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.791531 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/rsync/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.793129 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-updater/0.log" Jan 26 20:18:50 crc kubenswrapper[4770]: I0126 20:18:50.845344 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/swift-recon-cron/0.log" Jan 26 20:18:51 crc kubenswrapper[4770]: I0126 20:18:51.185970 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b864a6fc-56ae-4c06-ad45-4ca55e1afd91/tempest-tests-tempest-tests-runner/0.log" Jan 26 20:18:51 crc kubenswrapper[4770]: I0126 20:18:51.213387 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8_50064c0b-e5a3-46a3-9053-536fcbe380a3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:51 crc kubenswrapper[4770]: I0126 20:18:51.241404 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6d5f3552-6711-4496-a6c3-b15ee1664349/test-operator-logs-container/0.log" Jan 26 20:18:51 crc kubenswrapper[4770]: I0126 20:18:51.411548 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-td8t6_608d349d-127c-4f0b-9a56-0368dcd0e46f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:18:51 crc kubenswrapper[4770]: I0126 20:18:51.767331 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:18:51 crc kubenswrapper[4770]: E0126 20:18:51.768430 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:18:52 crc kubenswrapper[4770]: I0126 20:18:52.071235 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_e5e85df5-499b-4543-aab5-e1d3ce9d1473/watcher-applier/0.log" Jan 26 20:18:52 crc kubenswrapper[4770]: I0126 20:18:52.203432 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eacb7365-d724-4d52-96c8-edb12977e1f3/memcached/0.log" Jan 26 20:18:52 crc kubenswrapper[4770]: I0126 20:18:52.737015 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_fbe6a16b-f234-4dcc-800e-7eb6338cc264/watcher-api-log/0.log" Jan 26 20:18:54 crc kubenswrapper[4770]: I0126 20:18:54.865289 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_e9760499-8609-4691-b587-2265122f7af7/watcher-decision-engine/0.log" Jan 26 20:18:55 
crc kubenswrapper[4770]: I0126 20:18:55.620359 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_fbe6a16b-f234-4dcc-800e-7eb6338cc264/watcher-api/0.log" Jan 26 20:19:05 crc kubenswrapper[4770]: I0126 20:19:05.781240 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:19:05 crc kubenswrapper[4770]: E0126 20:19:05.784409 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:19:16 crc kubenswrapper[4770]: I0126 20:19:16.767863 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:19:16 crc kubenswrapper[4770]: E0126 20:19:16.768920 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:19:18 crc kubenswrapper[4770]: I0126 20:19:18.471713 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/util/0.log" Jan 26 20:19:18 crc kubenswrapper[4770]: I0126 20:19:18.587092 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/util/0.log" Jan 26 20:19:18 crc kubenswrapper[4770]: I0126 20:19:18.666126 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/pull/0.log" Jan 26 20:19:18 crc kubenswrapper[4770]: I0126 20:19:18.727392 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/pull/0.log" Jan 26 20:19:18 crc kubenswrapper[4770]: I0126 20:19:18.912115 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/extract/0.log" Jan 26 20:19:18 crc kubenswrapper[4770]: I0126 20:19:18.912765 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/util/0.log" Jan 26 20:19:18 crc kubenswrapper[4770]: I0126 20:19:18.930978 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/pull/0.log" Jan 26 20:19:19 crc kubenswrapper[4770]: I0126 20:19:19.201787 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-x8m5l_1666ea4c-3865-4bc2-8741-29383616e875/manager/0.log" Jan 26 20:19:19 crc kubenswrapper[4770]: I0126 20:19:19.204382 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-g9nzc_dc15189d-c78f-475d-9a49-dac90d4d4fcb/manager/0.log" Jan 26 20:19:19 crc 
kubenswrapper[4770]: I0126 20:19:19.352958 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-gwg5f_7dfabc71-10aa-4337-a700-6dda2a4819d5/manager/0.log" Jan 26 20:19:19 crc kubenswrapper[4770]: I0126 20:19:19.465148 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-h2zrp_99b8587f-51d1-4cb2-a0ab-e131c9135388/manager/0.log" Jan 26 20:19:19 crc kubenswrapper[4770]: I0126 20:19:19.532559 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-g4brh_c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f/manager/0.log" Jan 26 20:19:19 crc kubenswrapper[4770]: I0126 20:19:19.659787 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-zn9m9_cc595d5d-2f69-47a8-a63f-7b4abce23fdd/manager/0.log" Jan 26 20:19:19 crc kubenswrapper[4770]: I0126 20:19:19.857974 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-jg69w_0e7b29c5-2473-488f-a8cf-57863472bd68/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.042425 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-2tv9j_462ae2ba-a49e-4eb3-9d7e-0a853412206f/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.141748 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-v9wk4_68c5aef7-2f00-4a28-8a25-6af0a5cd4013/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.218377 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-58zsz_444d3be6-b12b-4473-abff-a5e5f35af270/manager/0.log" Jan 26 
20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.373107 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n_7ac27e32-922a-4a46-9bb3-a3daa301dee7/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.411933 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-4bpjq_d427e158-3f69-44b8-abe3-1510fb4fdd1e/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.609579 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-pfz5s_2b2f16ec-bd97-4ff0-acf6-af298b2f3736/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.616302 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-8wtk6_ffc82616-ae6f-4f03-9c55-c235cd7cb5ff/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.772458 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz_b594f7f1-d369-4dd7-8d7f-2969df165fb4/manager/0.log" Jan 26 20:19:20 crc kubenswrapper[4770]: I0126 20:19:20.980184 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bf847bbdc-9phhr_b2b075a6-2519-42f2-876d-c0249db54ca4/operator/0.log" Jan 26 20:19:21 crc kubenswrapper[4770]: I0126 20:19:21.181113 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cttfq_9093abfb-eda1-4bea-a7c8-1610996eec7c/registry-server/0.log" Jan 26 20:19:21 crc kubenswrapper[4770]: I0126 20:19:21.501827 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-745tt_6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3/manager/0.log" 
Jan 26 20:19:21 crc kubenswrapper[4770]: I0126 20:19:21.537769 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-gwfqm_b6b3bfbb-893b-4122-8534-664e57faa6ce/manager/0.log" Jan 26 20:19:21 crc kubenswrapper[4770]: I0126 20:19:21.987378 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9fnjm_ed015d41-0a86-45bc-ac7b-410e6ef09b6e/operator/0.log" Jan 26 20:19:22 crc kubenswrapper[4770]: I0126 20:19:22.212527 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6xngb_1fb1320e-c82f-4927-a48b-94ce5b6dcc03/manager/0.log" Jan 26 20:19:22 crc kubenswrapper[4770]: I0126 20:19:22.434312 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6796fcb5b-6wf85_c24f34a9-cf76-44f8-8435-ff01eca67ce3/manager/0.log" Jan 26 20:19:22 crc kubenswrapper[4770]: I0126 20:19:22.461644 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-jllkr_bce0b4ae-6301-4b38-b960-13962608dab0/manager/0.log" Jan 26 20:19:22 crc kubenswrapper[4770]: I0126 20:19:22.539335 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4vb4t_752eb71a-ee7a-47da-8945-41eee7a8c6b3/manager/0.log" Jan 26 20:19:22 crc kubenswrapper[4770]: I0126 20:19:22.636825 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6bf5b95546-9qq5g_d9a28594-7011-4810-a859-972dcde899e9/manager/0.log" Jan 26 20:19:30 crc kubenswrapper[4770]: I0126 20:19:30.767308 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:19:30 crc kubenswrapper[4770]: E0126 
20:19:30.768124 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:19:41 crc kubenswrapper[4770]: I0126 20:19:41.773395 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:19:41 crc kubenswrapper[4770]: E0126 20:19:41.788334 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:19:43 crc kubenswrapper[4770]: I0126 20:19:43.951048 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dz75h_23cf8f72-83fa-451e-afe9-08b8377f969d/control-plane-machine-set-operator/0.log" Jan 26 20:19:44 crc kubenswrapper[4770]: I0126 20:19:44.112578 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zm2q9_4cd4eed4-e59b-4987-936a-b880b81311a1/kube-rbac-proxy/0.log" Jan 26 20:19:44 crc kubenswrapper[4770]: I0126 20:19:44.181765 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zm2q9_4cd4eed4-e59b-4987-936a-b880b81311a1/machine-api-operator/0.log" Jan 26 20:19:56 crc kubenswrapper[4770]: I0126 20:19:56.768273 4770 scope.go:117] "RemoveContainer" 
containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:19:56 crc kubenswrapper[4770]: E0126 20:19:56.769255 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:19:58 crc kubenswrapper[4770]: I0126 20:19:58.143340 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-zg6pt_5e274239-64f9-423e-a00b-0867c43ce747/cert-manager-controller/0.log" Jan 26 20:19:58 crc kubenswrapper[4770]: I0126 20:19:58.290479 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wbbh5_f363eee1-0b76-4000-9ab2-8506a4ccb1db/cert-manager-webhook/0.log" Jan 26 20:19:58 crc kubenswrapper[4770]: I0126 20:19:58.293743 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xvnht_5d309de3-0825-4929-9867-fdcd48df6320/cert-manager-cainjector/0.log" Jan 26 20:20:09 crc kubenswrapper[4770]: I0126 20:20:09.767752 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:20:09 crc kubenswrapper[4770]: E0126 20:20:09.768486 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:20:12 crc 
kubenswrapper[4770]: I0126 20:20:12.512375 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qzgv8_4e872b8d-441d-4fe7-abe1-12d880b17f99/nmstate-console-plugin/0.log" Jan 26 20:20:12 crc kubenswrapper[4770]: I0126 20:20:12.623873 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-224k9_47ba2b14-2e10-43cd-9b79-1c9350662bc0/nmstate-handler/0.log" Jan 26 20:20:12 crc kubenswrapper[4770]: I0126 20:20:12.677384 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wx7dh_a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7/kube-rbac-proxy/0.log" Jan 26 20:20:12 crc kubenswrapper[4770]: I0126 20:20:12.703430 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wx7dh_a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7/nmstate-metrics/0.log" Jan 26 20:20:12 crc kubenswrapper[4770]: I0126 20:20:12.799973 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-5p9rk_c9d57646-d6ef-42b3-8d4e-445486b6e18d/nmstate-operator/0.log" Jan 26 20:20:12 crc kubenswrapper[4770]: I0126 20:20:12.866733 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qplrm_0f6003df-fc85-4c3a-ad98-822f6e7d670d/nmstate-webhook/0.log" Jan 26 20:20:23 crc kubenswrapper[4770]: I0126 20:20:23.767154 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:20:23 crc kubenswrapper[4770]: E0126 20:20:23.767904 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:20:26 crc kubenswrapper[4770]: I0126 20:20:26.684761 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fhb9k_3856ceb2-87c8-4db0-bbb8-66cf7713accc/prometheus-operator/0.log" Jan 26 20:20:26 crc kubenswrapper[4770]: I0126 20:20:26.792802 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js_2d01f9de-1cce-41c6-9a48-914289d32207/prometheus-operator-admission-webhook/0.log" Jan 26 20:20:26 crc kubenswrapper[4770]: I0126 20:20:26.876746 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5_2308db67-1c3e-465c-8574-58fe145f34e4/prometheus-operator-admission-webhook/0.log" Jan 26 20:20:26 crc kubenswrapper[4770]: I0126 20:20:26.995794 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kgxzc_5660d99f-cacd-4602-83a8-e6e152380afc/operator/0.log" Jan 26 20:20:27 crc kubenswrapper[4770]: I0126 20:20:27.090336 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gjmw8_1d294f34-81c6-46f1-9fa0-5950a2a7337f/perses-operator/0.log" Jan 26 20:20:34 crc kubenswrapper[4770]: I0126 20:20:34.852557 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dsnpw"] Jan 26 20:20:34 crc kubenswrapper[4770]: E0126 20:20:34.853460 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b6cb9d-ebff-47c7-8b58-f4921fad2221" containerName="container-00" Jan 26 20:20:34 crc kubenswrapper[4770]: I0126 20:20:34.853476 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b6cb9d-ebff-47c7-8b58-f4921fad2221" containerName="container-00" Jan 26 20:20:34 crc 
kubenswrapper[4770]: I0126 20:20:34.853745 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b6cb9d-ebff-47c7-8b58-f4921fad2221" containerName="container-00" Jan 26 20:20:34 crc kubenswrapper[4770]: I0126 20:20:34.855389 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:34 crc kubenswrapper[4770]: I0126 20:20:34.870541 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dsnpw"] Jan 26 20:20:34 crc kubenswrapper[4770]: I0126 20:20:34.935222 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-utilities\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:34 crc kubenswrapper[4770]: I0126 20:20:34.935422 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-catalog-content\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:34 crc kubenswrapper[4770]: I0126 20:20:34.935568 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2xhr\" (UniqueName: \"kubernetes.io/projected/266b5150-3f62-4b07-a70b-12b0b148e097-kube-api-access-f2xhr\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.037894 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2xhr\" (UniqueName: 
\"kubernetes.io/projected/266b5150-3f62-4b07-a70b-12b0b148e097-kube-api-access-f2xhr\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.038141 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-utilities\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.038199 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-catalog-content\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.038714 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-utilities\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.038767 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-catalog-content\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.073530 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2xhr\" (UniqueName: 
\"kubernetes.io/projected/266b5150-3f62-4b07-a70b-12b0b148e097-kube-api-access-f2xhr\") pod \"redhat-operators-dsnpw\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.194451 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.648802 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdwck"] Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.652240 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.658649 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdwck"] Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.751814 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h76gc\" (UniqueName: \"kubernetes.io/projected/54e2d04a-3221-49e1-91ec-f19a595c6ba6-kube-api-access-h76gc\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.751878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-utilities\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.751960 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-catalog-content\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.837648 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dsnpw"] Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.863766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h76gc\" (UniqueName: \"kubernetes.io/projected/54e2d04a-3221-49e1-91ec-f19a595c6ba6-kube-api-access-h76gc\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.863837 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-utilities\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.863993 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-catalog-content\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.866198 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-catalog-content\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc 
kubenswrapper[4770]: I0126 20:20:35.866506 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-utilities\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.894457 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h76gc\" (UniqueName: \"kubernetes.io/projected/54e2d04a-3221-49e1-91ec-f19a595c6ba6-kube-api-access-h76gc\") pod \"certified-operators-qdwck\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:35 crc kubenswrapper[4770]: I0126 20:20:35.982725 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:36 crc kubenswrapper[4770]: I0126 20:20:36.506175 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdwck"] Jan 26 20:20:36 crc kubenswrapper[4770]: I0126 20:20:36.545820 4770 generic.go:334] "Generic (PLEG): container finished" podID="266b5150-3f62-4b07-a70b-12b0b148e097" containerID="da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b" exitCode=0 Jan 26 20:20:36 crc kubenswrapper[4770]: I0126 20:20:36.545897 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsnpw" event={"ID":"266b5150-3f62-4b07-a70b-12b0b148e097","Type":"ContainerDied","Data":"da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b"} Jan 26 20:20:36 crc kubenswrapper[4770]: I0126 20:20:36.545938 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsnpw" 
event={"ID":"266b5150-3f62-4b07-a70b-12b0b148e097","Type":"ContainerStarted","Data":"5768b62836f076dfb799a3c623fa2cb649fed3f6bf088e929c0ceb8b15510bc3"} Jan 26 20:20:36 crc kubenswrapper[4770]: I0126 20:20:36.550986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdwck" event={"ID":"54e2d04a-3221-49e1-91ec-f19a595c6ba6","Type":"ContainerStarted","Data":"44d8386fc78ab2ff2f64ce1522700fc717e0f940ac9bc51ae8a1822672fad710"} Jan 26 20:20:37 crc kubenswrapper[4770]: I0126 20:20:37.563298 4770 generic.go:334] "Generic (PLEG): container finished" podID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerID="4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f" exitCode=0 Jan 26 20:20:37 crc kubenswrapper[4770]: I0126 20:20:37.563398 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdwck" event={"ID":"54e2d04a-3221-49e1-91ec-f19a595c6ba6","Type":"ContainerDied","Data":"4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f"} Jan 26 20:20:37 crc kubenswrapper[4770]: I0126 20:20:37.767131 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:20:37 crc kubenswrapper[4770]: E0126 20:20:37.767626 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:20:38 crc kubenswrapper[4770]: I0126 20:20:38.575460 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsnpw" 
event={"ID":"266b5150-3f62-4b07-a70b-12b0b148e097","Type":"ContainerStarted","Data":"1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b"} Jan 26 20:20:38 crc kubenswrapper[4770]: I0126 20:20:38.577234 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdwck" event={"ID":"54e2d04a-3221-49e1-91ec-f19a595c6ba6","Type":"ContainerStarted","Data":"0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d"} Jan 26 20:20:42 crc kubenswrapper[4770]: I0126 20:20:42.616429 4770 generic.go:334] "Generic (PLEG): container finished" podID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerID="0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d" exitCode=0 Jan 26 20:20:42 crc kubenswrapper[4770]: I0126 20:20:42.616527 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdwck" event={"ID":"54e2d04a-3221-49e1-91ec-f19a595c6ba6","Type":"ContainerDied","Data":"0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d"} Jan 26 20:20:43 crc kubenswrapper[4770]: I0126 20:20:43.706123 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgxhp_0fa5c4a3-9cf1-470f-a627-4d75201218c6/kube-rbac-proxy/0.log" Jan 26 20:20:43 crc kubenswrapper[4770]: I0126 20:20:43.824884 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgxhp_0fa5c4a3-9cf1-470f-a627-4d75201218c6/controller/0.log" Jan 26 20:20:43 crc kubenswrapper[4770]: I0126 20:20:43.919728 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.152475 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.163478 
4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.193799 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.202572 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.398353 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.436109 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.437739 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.488271 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.634132 4770 generic.go:334] "Generic (PLEG): container finished" podID="266b5150-3f62-4b07-a70b-12b0b148e097" containerID="1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b" exitCode=0 Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.634181 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsnpw" event={"ID":"266b5150-3f62-4b07-a70b-12b0b148e097","Type":"ContainerDied","Data":"1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b"} Jan 26 
20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.689309 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/controller/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.709864 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.780054 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.782302 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.992615 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/frr-metrics/0.log" Jan 26 20:20:44 crc kubenswrapper[4770]: I0126 20:20:44.993272 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/kube-rbac-proxy-frr/0.log" Jan 26 20:20:45 crc kubenswrapper[4770]: I0126 20:20:45.018219 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/kube-rbac-proxy/0.log" Jan 26 20:20:45 crc kubenswrapper[4770]: I0126 20:20:45.334334 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-n5vnz_8f9a805c-9078-43b4-a52d-bb6c6d695422/frr-k8s-webhook-server/0.log" Jan 26 20:20:45 crc kubenswrapper[4770]: I0126 20:20:45.336722 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/reloader/0.log" Jan 26 20:20:45 crc kubenswrapper[4770]: I0126 
20:20:45.592529 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-859d6f9486-gtpqr_ee88a890-d295-4129-8baf-ade3a43b3758/manager/0.log" Jan 26 20:20:45 crc kubenswrapper[4770]: I0126 20:20:45.649980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdwck" event={"ID":"54e2d04a-3221-49e1-91ec-f19a595c6ba6","Type":"ContainerStarted","Data":"df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da"} Jan 26 20:20:45 crc kubenswrapper[4770]: I0126 20:20:45.983786 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:45 crc kubenswrapper[4770]: I0126 20:20:45.983821 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:46 crc kubenswrapper[4770]: I0126 20:20:46.080750 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85d868fd8c-rclln_b7b62592-2dab-442b-a5ef-a02562b7ed0c/webhook-server/0.log" Jan 26 20:20:46 crc kubenswrapper[4770]: I0126 20:20:46.086772 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lxhr9_95fe3572-9eab-4945-bf35-bcf4cec1764d/kube-rbac-proxy/0.log" Jan 26 20:20:46 crc kubenswrapper[4770]: I0126 20:20:46.667108 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsnpw" event={"ID":"266b5150-3f62-4b07-a70b-12b0b148e097","Type":"ContainerStarted","Data":"2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605"} Jan 26 20:20:46 crc kubenswrapper[4770]: I0126 20:20:46.694303 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qdwck" podStartSLOduration=3.939597387 podStartE2EDuration="11.694285374s" podCreationTimestamp="2026-01-26 20:20:35 +0000 
UTC" firstStartedPulling="2026-01-26 20:20:37.627978841 +0000 UTC m=+5922.192885573" lastFinishedPulling="2026-01-26 20:20:45.382666828 +0000 UTC m=+5929.947573560" observedRunningTime="2026-01-26 20:20:45.670561164 +0000 UTC m=+5930.235467906" watchObservedRunningTime="2026-01-26 20:20:46.694285374 +0000 UTC m=+5931.259192096" Jan 26 20:20:46 crc kubenswrapper[4770]: I0126 20:20:46.696817 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dsnpw" podStartSLOduration=3.211345921 podStartE2EDuration="12.696797382s" podCreationTimestamp="2026-01-26 20:20:34 +0000 UTC" firstStartedPulling="2026-01-26 20:20:36.547639411 +0000 UTC m=+5921.112546143" lastFinishedPulling="2026-01-26 20:20:46.033090872 +0000 UTC m=+5930.597997604" observedRunningTime="2026-01-26 20:20:46.692142696 +0000 UTC m=+5931.257049428" watchObservedRunningTime="2026-01-26 20:20:46.696797382 +0000 UTC m=+5931.261704114" Jan 26 20:20:46 crc kubenswrapper[4770]: I0126 20:20:46.708909 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/frr/0.log" Jan 26 20:20:47 crc kubenswrapper[4770]: I0126 20:20:47.014339 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lxhr9_95fe3572-9eab-4945-bf35-bcf4cec1764d/speaker/0.log" Jan 26 20:20:47 crc kubenswrapper[4770]: I0126 20:20:47.027987 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qdwck" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="registry-server" probeResult="failure" output=< Jan 26 20:20:47 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 20:20:47 crc kubenswrapper[4770]: > Jan 26 20:20:49 crc kubenswrapper[4770]: I0126 20:20:49.768026 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:20:49 crc 
kubenswrapper[4770]: E0126 20:20:49.768908 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:20:55 crc kubenswrapper[4770]: I0126 20:20:55.195148 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:55 crc kubenswrapper[4770]: I0126 20:20:55.195646 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:20:56 crc kubenswrapper[4770]: I0126 20:20:56.027843 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:56 crc kubenswrapper[4770]: I0126 20:20:56.089656 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:20:56 crc kubenswrapper[4770]: I0126 20:20:56.254446 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dsnpw" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="registry-server" probeResult="failure" output=< Jan 26 20:20:56 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 20:20:56 crc kubenswrapper[4770]: > Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.037369 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdwck"] Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.038157 4770 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-qdwck" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="registry-server" containerID="cri-o://df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da" gracePeriod=2 Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.518790 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.661961 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-catalog-content\") pod \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.662128 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h76gc\" (UniqueName: \"kubernetes.io/projected/54e2d04a-3221-49e1-91ec-f19a595c6ba6-kube-api-access-h76gc\") pod \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.662297 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-utilities\") pod \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\" (UID: \"54e2d04a-3221-49e1-91ec-f19a595c6ba6\") " Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.663580 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-utilities" (OuterVolumeSpecName: "utilities") pod "54e2d04a-3221-49e1-91ec-f19a595c6ba6" (UID: "54e2d04a-3221-49e1-91ec-f19a595c6ba6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.682510 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e2d04a-3221-49e1-91ec-f19a595c6ba6-kube-api-access-h76gc" (OuterVolumeSpecName: "kube-api-access-h76gc") pod "54e2d04a-3221-49e1-91ec-f19a595c6ba6" (UID: "54e2d04a-3221-49e1-91ec-f19a595c6ba6"). InnerVolumeSpecName "kube-api-access-h76gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.726427 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54e2d04a-3221-49e1-91ec-f19a595c6ba6" (UID: "54e2d04a-3221-49e1-91ec-f19a595c6ba6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.765278 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.765313 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54e2d04a-3221-49e1-91ec-f19a595c6ba6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.765325 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h76gc\" (UniqueName: \"kubernetes.io/projected/54e2d04a-3221-49e1-91ec-f19a595c6ba6-kube-api-access-h76gc\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.791135 4770 generic.go:334] "Generic (PLEG): container finished" podID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" 
containerID="df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da" exitCode=0 Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.791170 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdwck" event={"ID":"54e2d04a-3221-49e1-91ec-f19a595c6ba6","Type":"ContainerDied","Data":"df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da"} Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.791193 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdwck" event={"ID":"54e2d04a-3221-49e1-91ec-f19a595c6ba6","Type":"ContainerDied","Data":"44d8386fc78ab2ff2f64ce1522700fc717e0f940ac9bc51ae8a1822672fad710"} Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.791210 4770 scope.go:117] "RemoveContainer" containerID="df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.791326 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qdwck" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.831396 4770 scope.go:117] "RemoveContainer" containerID="0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.835285 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdwck"] Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.865523 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdwck"] Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.891518 4770 scope.go:117] "RemoveContainer" containerID="4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.942932 4770 scope.go:117] "RemoveContainer" containerID="df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da" Jan 26 20:21:00 crc kubenswrapper[4770]: E0126 20:21:00.943299 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da\": container with ID starting with df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da not found: ID does not exist" containerID="df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.943324 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da"} err="failed to get container status \"df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da\": rpc error: code = NotFound desc = could not find container \"df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da\": container with ID starting with df303ef00691d18405379deb29952713eed9bc815b699a9aea09c79f7231c5da not 
found: ID does not exist" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.943343 4770 scope.go:117] "RemoveContainer" containerID="0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d" Jan 26 20:21:00 crc kubenswrapper[4770]: E0126 20:21:00.943740 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d\": container with ID starting with 0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d not found: ID does not exist" containerID="0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.943784 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d"} err="failed to get container status \"0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d\": rpc error: code = NotFound desc = could not find container \"0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d\": container with ID starting with 0873671cb0aea86a23795814448354baa92a03d1929b749c2dacdc6f1e32083d not found: ID does not exist" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.943810 4770 scope.go:117] "RemoveContainer" containerID="4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f" Jan 26 20:21:00 crc kubenswrapper[4770]: E0126 20:21:00.944062 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f\": container with ID starting with 4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f not found: ID does not exist" containerID="4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f" Jan 26 20:21:00 crc kubenswrapper[4770]: I0126 20:21:00.944084 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f"} err="failed to get container status \"4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f\": rpc error: code = NotFound desc = could not find container \"4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f\": container with ID starting with 4894b2f2ebcc176a4b8df0e25dd7e4756e81343ab368def808c1ceb2e7a0f56f not found: ID does not exist" Jan 26 20:21:01 crc kubenswrapper[4770]: I0126 20:21:01.777619 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" path="/var/lib/kubelet/pods/54e2d04a-3221-49e1-91ec-f19a595c6ba6/volumes" Jan 26 20:21:02 crc kubenswrapper[4770]: I0126 20:21:02.186380 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/util/0.log" Jan 26 20:21:02 crc kubenswrapper[4770]: I0126 20:21:02.532500 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/pull/0.log" Jan 26 20:21:02 crc kubenswrapper[4770]: I0126 20:21:02.639902 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/pull/0.log" Jan 26 20:21:02 crc kubenswrapper[4770]: I0126 20:21:02.679791 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/util/0.log" Jan 26 20:21:02 crc kubenswrapper[4770]: I0126 20:21:02.803499 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/util/0.log" Jan 26 20:21:02 crc kubenswrapper[4770]: I0126 20:21:02.806686 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/pull/0.log" Jan 26 20:21:02 crc kubenswrapper[4770]: I0126 20:21:02.893225 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/extract/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.003448 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/util/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.178777 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/pull/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.227204 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/util/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.242867 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/pull/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.384967 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/extract/0.log" Jan 
26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.395926 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/util/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.478755 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/pull/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.600132 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/util/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.767827 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:21:03 crc kubenswrapper[4770]: E0126 20:21:03.768150 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.809362 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/pull/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.810610 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/pull/0.log" Jan 26 20:21:03 crc 
kubenswrapper[4770]: I0126 20:21:03.844965 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/util/0.log" Jan 26 20:21:03 crc kubenswrapper[4770]: I0126 20:21:03.968105 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/pull/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.020263 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/util/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.066797 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/extract/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.163947 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-utilities/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.374672 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-utilities/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.380950 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-content/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.381453 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-content/0.log" Jan 26 20:21:04 crc 
kubenswrapper[4770]: I0126 20:21:04.515522 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-utilities/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.543811 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-content/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.814055 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-utilities/0.log" Jan 26 20:21:04 crc kubenswrapper[4770]: I0126 20:21:04.957033 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-content/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.010301 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-content/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.012266 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-utilities/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.253479 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.308417 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-utilities/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.321756 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.368580 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/registry-server/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.415241 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-content/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.579030 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ttbbg_38d944cd-c6cb-4cf6-ada9-9077a8b9102e/marketplace-operator/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.601617 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/registry-server/0.log" Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.836655 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dsnpw"] Jan 26 20:21:05 crc kubenswrapper[4770]: I0126 20:21:05.904165 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-utilities/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.102985 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-utilities/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.110330 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-content/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.113007 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-content/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.439626 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-utilities/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.514430 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-content/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.549674 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dsnpw_266b5150-3f62-4b07-a70b-12b0b148e097/extract-utilities/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.554248 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/registry-server/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.665568 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dsnpw_266b5150-3f62-4b07-a70b-12b0b148e097/extract-utilities/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.696568 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dsnpw_266b5150-3f62-4b07-a70b-12b0b148e097/extract-content/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.719069 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dsnpw_266b5150-3f62-4b07-a70b-12b0b148e097/extract-content/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.847448 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dsnpw" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" 
containerName="registry-server" containerID="cri-o://2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605" gracePeriod=2 Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.951197 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-utilities/0.log" Jan 26 20:21:06 crc kubenswrapper[4770]: I0126 20:21:06.988912 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dsnpw_266b5150-3f62-4b07-a70b-12b0b148e097/registry-server/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.005318 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dsnpw_266b5150-3f62-4b07-a70b-12b0b148e097/extract-content/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.025690 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dsnpw_266b5150-3f62-4b07-a70b-12b0b148e097/extract-utilities/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.182919 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-utilities/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.231636 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-content/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.246071 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-content/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.310065 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.352937 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2xhr\" (UniqueName: \"kubernetes.io/projected/266b5150-3f62-4b07-a70b-12b0b148e097-kube-api-access-f2xhr\") pod \"266b5150-3f62-4b07-a70b-12b0b148e097\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.353150 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-catalog-content\") pod \"266b5150-3f62-4b07-a70b-12b0b148e097\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.353259 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-utilities\") pod \"266b5150-3f62-4b07-a70b-12b0b148e097\" (UID: \"266b5150-3f62-4b07-a70b-12b0b148e097\") " Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.353897 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-utilities" (OuterVolumeSpecName: "utilities") pod "266b5150-3f62-4b07-a70b-12b0b148e097" (UID: "266b5150-3f62-4b07-a70b-12b0b148e097"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.363897 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266b5150-3f62-4b07-a70b-12b0b148e097-kube-api-access-f2xhr" (OuterVolumeSpecName: "kube-api-access-f2xhr") pod "266b5150-3f62-4b07-a70b-12b0b148e097" (UID: "266b5150-3f62-4b07-a70b-12b0b148e097"). InnerVolumeSpecName "kube-api-access-f2xhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.374548 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.374578 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2xhr\" (UniqueName: \"kubernetes.io/projected/266b5150-3f62-4b07-a70b-12b0b148e097-kube-api-access-f2xhr\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.441751 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-content/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.471490 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "266b5150-3f62-4b07-a70b-12b0b148e097" (UID: "266b5150-3f62-4b07-a70b-12b0b148e097"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.477506 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266b5150-3f62-4b07-a70b-12b0b148e097-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.485933 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-utilities/0.log" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.866564 4770 generic.go:334] "Generic (PLEG): container finished" podID="266b5150-3f62-4b07-a70b-12b0b148e097" containerID="2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605" exitCode=0 Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.866603 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsnpw" event={"ID":"266b5150-3f62-4b07-a70b-12b0b148e097","Type":"ContainerDied","Data":"2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605"} Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.866628 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dsnpw" event={"ID":"266b5150-3f62-4b07-a70b-12b0b148e097","Type":"ContainerDied","Data":"5768b62836f076dfb799a3c623fa2cb649fed3f6bf088e929c0ceb8b15510bc3"} Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.866638 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dsnpw" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.866655 4770 scope.go:117] "RemoveContainer" containerID="2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.890180 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dsnpw"] Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.894839 4770 scope.go:117] "RemoveContainer" containerID="1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.897472 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dsnpw"] Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.932912 4770 scope.go:117] "RemoveContainer" containerID="da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.969913 4770 scope.go:117] "RemoveContainer" containerID="2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605" Jan 26 20:21:07 crc kubenswrapper[4770]: E0126 20:21:07.970939 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605\": container with ID starting with 2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605 not found: ID does not exist" containerID="2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.971063 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605"} err="failed to get container status \"2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605\": rpc error: code = NotFound desc = could not find container 
\"2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605\": container with ID starting with 2ebdb724728d48977ad348b19d228d521f777cfa58270d886a12a6489f092605 not found: ID does not exist" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.971168 4770 scope.go:117] "RemoveContainer" containerID="1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b" Jan 26 20:21:07 crc kubenswrapper[4770]: E0126 20:21:07.971654 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b\": container with ID starting with 1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b not found: ID does not exist" containerID="1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.971725 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b"} err="failed to get container status \"1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b\": rpc error: code = NotFound desc = could not find container \"1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b\": container with ID starting with 1472a3c56c8dd0630131bea1d434fd137a74e318d37a7c33c52244aa89dbc97b not found: ID does not exist" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.971790 4770 scope.go:117] "RemoveContainer" containerID="da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b" Jan 26 20:21:07 crc kubenswrapper[4770]: E0126 20:21:07.974847 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b\": container with ID starting with da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b not found: ID does not exist" 
containerID="da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b" Jan 26 20:21:07 crc kubenswrapper[4770]: I0126 20:21:07.974893 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b"} err="failed to get container status \"da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b\": rpc error: code = NotFound desc = could not find container \"da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b\": container with ID starting with da7e358dda2df5ab09412a778bff8a447b34021707bb1d0a0fda8b3342d3009b not found: ID does not exist" Jan 26 20:21:08 crc kubenswrapper[4770]: I0126 20:21:08.238078 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/registry-server/0.log" Jan 26 20:21:09 crc kubenswrapper[4770]: I0126 20:21:09.787445 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" path="/var/lib/kubelet/pods/266b5150-3f62-4b07-a70b-12b0b148e097/volumes" Jan 26 20:21:18 crc kubenswrapper[4770]: I0126 20:21:18.766900 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:21:18 crc kubenswrapper[4770]: E0126 20:21:18.767793 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:21:22 crc kubenswrapper[4770]: I0126 20:21:22.720724 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js_2d01f9de-1cce-41c6-9a48-914289d32207/prometheus-operator-admission-webhook/0.log" Jan 26 20:21:22 crc kubenswrapper[4770]: I0126 20:21:22.751543 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fhb9k_3856ceb2-87c8-4db0-bbb8-66cf7713accc/prometheus-operator/0.log" Jan 26 20:21:22 crc kubenswrapper[4770]: I0126 20:21:22.849322 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5_2308db67-1c3e-465c-8574-58fe145f34e4/prometheus-operator-admission-webhook/0.log" Jan 26 20:21:22 crc kubenswrapper[4770]: I0126 20:21:22.927090 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gjmw8_1d294f34-81c6-46f1-9fa0-5950a2a7337f/perses-operator/0.log" Jan 26 20:21:23 crc kubenswrapper[4770]: I0126 20:21:23.166835 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kgxzc_5660d99f-cacd-4602-83a8-e6e152380afc/operator/0.log" Jan 26 20:21:30 crc kubenswrapper[4770]: I0126 20:21:30.767647 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:21:30 crc kubenswrapper[4770]: E0126 20:21:30.768335 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:21:41 crc kubenswrapper[4770]: I0126 20:21:41.767872 4770 scope.go:117] "RemoveContainer" 
containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:21:41 crc kubenswrapper[4770]: E0126 20:21:41.768534 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:21:56 crc kubenswrapper[4770]: I0126 20:21:56.768058 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:21:56 crc kubenswrapper[4770]: E0126 20:21:56.769122 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:22:08 crc kubenswrapper[4770]: I0126 20:22:08.767621 4770 scope.go:117] "RemoveContainer" containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:22:09 crc kubenswrapper[4770]: I0126 20:22:09.554939 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"c9c074462de267fbf12204a8ff74942f30b5615b58c47e76522dde64fbd6be4e"} Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.507295 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-85xj8"] Jan 26 20:22:15 crc kubenswrapper[4770]: E0126 20:22:15.508727 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="registry-server" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.508750 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="registry-server" Jan 26 20:22:15 crc kubenswrapper[4770]: E0126 20:22:15.508781 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="extract-utilities" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.508798 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="extract-utilities" Jan 26 20:22:15 crc kubenswrapper[4770]: E0126 20:22:15.508816 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="extract-content" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.508830 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="extract-content" Jan 26 20:22:15 crc kubenswrapper[4770]: E0126 20:22:15.508857 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="extract-content" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.508870 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="extract-content" Jan 26 20:22:15 crc kubenswrapper[4770]: E0126 20:22:15.508893 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="extract-utilities" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.508906 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="extract-utilities" Jan 26 20:22:15 crc kubenswrapper[4770]: E0126 20:22:15.508955 4770 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="registry-server" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.508967 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="registry-server" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.509374 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e2d04a-3221-49e1-91ec-f19a595c6ba6" containerName="registry-server" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.509408 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="266b5150-3f62-4b07-a70b-12b0b148e097" containerName="registry-server" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.512344 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.525139 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-85xj8"] Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.617713 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82bw\" (UniqueName: \"kubernetes.io/projected/5e230f60-6312-49bc-a037-48d4b25b14bc-kube-api-access-l82bw\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.620713 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-catalog-content\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 
20:22:15.620987 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-utilities\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.722615 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-utilities\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.723084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82bw\" (UniqueName: \"kubernetes.io/projected/5e230f60-6312-49bc-a037-48d4b25b14bc-kube-api-access-l82bw\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.723363 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-catalog-content\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.724206 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-utilities\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.724532 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-catalog-content\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.747909 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82bw\" (UniqueName: \"kubernetes.io/projected/5e230f60-6312-49bc-a037-48d4b25b14bc-kube-api-access-l82bw\") pod \"redhat-marketplace-85xj8\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:15 crc kubenswrapper[4770]: I0126 20:22:15.851521 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:16 crc kubenswrapper[4770]: I0126 20:22:16.348766 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-85xj8"] Jan 26 20:22:16 crc kubenswrapper[4770]: W0126 20:22:16.357094 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e230f60_6312_49bc_a037_48d4b25b14bc.slice/crio-8ecda0fe760d76877bbdff7f03f658799a17a6524c0e7184690a051f29dd1554 WatchSource:0}: Error finding container 8ecda0fe760d76877bbdff7f03f658799a17a6524c0e7184690a051f29dd1554: Status 404 returned error can't find the container with id 8ecda0fe760d76877bbdff7f03f658799a17a6524c0e7184690a051f29dd1554 Jan 26 20:22:16 crc kubenswrapper[4770]: I0126 20:22:16.636919 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85xj8" event={"ID":"5e230f60-6312-49bc-a037-48d4b25b14bc","Type":"ContainerDied","Data":"c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64"} Jan 26 20:22:16 crc kubenswrapper[4770]: I0126 
20:22:16.636945 4770 generic.go:334] "Generic (PLEG): container finished" podID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerID="c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64" exitCode=0 Jan 26 20:22:16 crc kubenswrapper[4770]: I0126 20:22:16.637025 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85xj8" event={"ID":"5e230f60-6312-49bc-a037-48d4b25b14bc","Type":"ContainerStarted","Data":"8ecda0fe760d76877bbdff7f03f658799a17a6524c0e7184690a051f29dd1554"} Jan 26 20:22:16 crc kubenswrapper[4770]: I0126 20:22:16.639830 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:22:17 crc kubenswrapper[4770]: I0126 20:22:17.648209 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85xj8" event={"ID":"5e230f60-6312-49bc-a037-48d4b25b14bc","Type":"ContainerStarted","Data":"c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50"} Jan 26 20:22:18 crc kubenswrapper[4770]: I0126 20:22:18.659802 4770 generic.go:334] "Generic (PLEG): container finished" podID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerID="c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50" exitCode=0 Jan 26 20:22:18 crc kubenswrapper[4770]: I0126 20:22:18.659860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85xj8" event={"ID":"5e230f60-6312-49bc-a037-48d4b25b14bc","Type":"ContainerDied","Data":"c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50"} Jan 26 20:22:19 crc kubenswrapper[4770]: I0126 20:22:19.673747 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85xj8" event={"ID":"5e230f60-6312-49bc-a037-48d4b25b14bc","Type":"ContainerStarted","Data":"c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578"} Jan 26 20:22:19 crc kubenswrapper[4770]: I0126 20:22:19.697594 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-85xj8" podStartSLOduration=2.260951049 podStartE2EDuration="4.697573662s" podCreationTimestamp="2026-01-26 20:22:15 +0000 UTC" firstStartedPulling="2026-01-26 20:22:16.639352139 +0000 UTC m=+6021.204258901" lastFinishedPulling="2026-01-26 20:22:19.075974752 +0000 UTC m=+6023.640881514" observedRunningTime="2026-01-26 20:22:19.690876052 +0000 UTC m=+6024.255782794" watchObservedRunningTime="2026-01-26 20:22:19.697573662 +0000 UTC m=+6024.262480404" Jan 26 20:22:25 crc kubenswrapper[4770]: I0126 20:22:25.852187 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:25 crc kubenswrapper[4770]: I0126 20:22:25.852949 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:25 crc kubenswrapper[4770]: I0126 20:22:25.957965 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:26 crc kubenswrapper[4770]: I0126 20:22:26.861690 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:26 crc kubenswrapper[4770]: I0126 20:22:26.943112 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-85xj8"] Jan 26 20:22:28 crc kubenswrapper[4770]: I0126 20:22:28.803078 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-85xj8" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="registry-server" containerID="cri-o://c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578" gracePeriod=2 Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.298794 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.364289 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-catalog-content\") pod \"5e230f60-6312-49bc-a037-48d4b25b14bc\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.364366 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-utilities\") pod \"5e230f60-6312-49bc-a037-48d4b25b14bc\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.364540 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l82bw\" (UniqueName: \"kubernetes.io/projected/5e230f60-6312-49bc-a037-48d4b25b14bc-kube-api-access-l82bw\") pod \"5e230f60-6312-49bc-a037-48d4b25b14bc\" (UID: \"5e230f60-6312-49bc-a037-48d4b25b14bc\") " Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.365619 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-utilities" (OuterVolumeSpecName: "utilities") pod "5e230f60-6312-49bc-a037-48d4b25b14bc" (UID: "5e230f60-6312-49bc-a037-48d4b25b14bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.372015 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e230f60-6312-49bc-a037-48d4b25b14bc-kube-api-access-l82bw" (OuterVolumeSpecName: "kube-api-access-l82bw") pod "5e230f60-6312-49bc-a037-48d4b25b14bc" (UID: "5e230f60-6312-49bc-a037-48d4b25b14bc"). InnerVolumeSpecName "kube-api-access-l82bw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.406682 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e230f60-6312-49bc-a037-48d4b25b14bc" (UID: "5e230f60-6312-49bc-a037-48d4b25b14bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.467924 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.468169 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e230f60-6312-49bc-a037-48d4b25b14bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.468181 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l82bw\" (UniqueName: \"kubernetes.io/projected/5e230f60-6312-49bc-a037-48d4b25b14bc-kube-api-access-l82bw\") on node \"crc\" DevicePath \"\"" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.816932 4770 generic.go:334] "Generic (PLEG): container finished" podID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerID="c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578" exitCode=0 Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.816975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-85xj8" event={"ID":"5e230f60-6312-49bc-a037-48d4b25b14bc","Type":"ContainerDied","Data":"c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578"} Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.817004 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-85xj8" event={"ID":"5e230f60-6312-49bc-a037-48d4b25b14bc","Type":"ContainerDied","Data":"8ecda0fe760d76877bbdff7f03f658799a17a6524c0e7184690a051f29dd1554"} Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.817013 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-85xj8" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.817025 4770 scope.go:117] "RemoveContainer" containerID="c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.861151 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-85xj8"] Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.872368 4770 scope.go:117] "RemoveContainer" containerID="c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.875179 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-85xj8"] Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.902964 4770 scope.go:117] "RemoveContainer" containerID="c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.951687 4770 scope.go:117] "RemoveContainer" containerID="c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578" Jan 26 20:22:29 crc kubenswrapper[4770]: E0126 20:22:29.952094 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578\": container with ID starting with c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578 not found: ID does not exist" containerID="c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.952124 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578"} err="failed to get container status \"c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578\": rpc error: code = NotFound desc = could not find container \"c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578\": container with ID starting with c571ab7541f1d183c18fef4a964dbd43fd4f83ab34739f9a606b839707eb3578 not found: ID does not exist" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.952143 4770 scope.go:117] "RemoveContainer" containerID="c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50" Jan 26 20:22:29 crc kubenswrapper[4770]: E0126 20:22:29.952755 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50\": container with ID starting with c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50 not found: ID does not exist" containerID="c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.952786 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50"} err="failed to get container status \"c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50\": rpc error: code = NotFound desc = could not find container \"c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50\": container with ID starting with c5df5511e6a7f7a51c443e82e5ec38d7ec78465875b390bb15e37412f5d7eb50 not found: ID does not exist" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.952806 4770 scope.go:117] "RemoveContainer" containerID="c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64" Jan 26 20:22:29 crc kubenswrapper[4770]: E0126 
20:22:29.953529 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64\": container with ID starting with c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64 not found: ID does not exist" containerID="c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64" Jan 26 20:22:29 crc kubenswrapper[4770]: I0126 20:22:29.953743 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64"} err="failed to get container status \"c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64\": rpc error: code = NotFound desc = could not find container \"c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64\": container with ID starting with c82d7879ba28e0c02951a0f31b10e7baca3443ce51612bdbcdf62bacdc679d64 not found: ID does not exist" Jan 26 20:22:31 crc kubenswrapper[4770]: I0126 20:22:31.809350 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" path="/var/lib/kubelet/pods/5e230f60-6312-49bc-a037-48d4b25b14bc/volumes" Jan 26 20:23:22 crc kubenswrapper[4770]: I0126 20:23:22.634085 4770 scope.go:117] "RemoveContainer" containerID="1195d1d7b19f1fcb225881a1f2c8cedbd1c3684815710afbd27da06e49a39be2" Jan 26 20:23:24 crc kubenswrapper[4770]: I0126 20:23:24.483688 4770 generic.go:334] "Generic (PLEG): container finished" podID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerID="b38ca10a1d1d7a5e409f483320d336ff7d167c8adceb4be48d08cab67f6686d3" exitCode=0 Jan 26 20:23:24 crc kubenswrapper[4770]: I0126 20:23:24.483931 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" 
event={"ID":"0c31a052-de8a-4db1-8b0c-308790e7f533","Type":"ContainerDied","Data":"b38ca10a1d1d7a5e409f483320d336ff7d167c8adceb4be48d08cab67f6686d3"} Jan 26 20:23:24 crc kubenswrapper[4770]: I0126 20:23:24.485786 4770 scope.go:117] "RemoveContainer" containerID="b38ca10a1d1d7a5e409f483320d336ff7d167c8adceb4be48d08cab67f6686d3" Jan 26 20:23:25 crc kubenswrapper[4770]: I0126 20:23:25.119087 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ztzq5_must-gather-fbv7n_0c31a052-de8a-4db1-8b0c-308790e7f533/gather/0.log" Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.131749 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ztzq5/must-gather-fbv7n"] Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.132930 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerName="copy" containerID="cri-o://a79000ebfb695e74019ad15e5f8590c3883404116544af21410a74e9a948d750" gracePeriod=2 Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.142518 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ztzq5/must-gather-fbv7n"] Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.610972 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ztzq5_must-gather-fbv7n_0c31a052-de8a-4db1-8b0c-308790e7f533/copy/0.log" Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.611585 4770 generic.go:334] "Generic (PLEG): container finished" podID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerID="a79000ebfb695e74019ad15e5f8590c3883404116544af21410a74e9a948d750" exitCode=143 Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.724586 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ztzq5_must-gather-fbv7n_0c31a052-de8a-4db1-8b0c-308790e7f533/copy/0.log" Jan 26 20:23:34 crc kubenswrapper[4770]: 
I0126 20:23:34.725013 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.803482 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkf2\" (UniqueName: \"kubernetes.io/projected/0c31a052-de8a-4db1-8b0c-308790e7f533-kube-api-access-xhkf2\") pod \"0c31a052-de8a-4db1-8b0c-308790e7f533\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.803805 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c31a052-de8a-4db1-8b0c-308790e7f533-must-gather-output\") pod \"0c31a052-de8a-4db1-8b0c-308790e7f533\" (UID: \"0c31a052-de8a-4db1-8b0c-308790e7f533\") " Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.810072 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c31a052-de8a-4db1-8b0c-308790e7f533-kube-api-access-xhkf2" (OuterVolumeSpecName: "kube-api-access-xhkf2") pod "0c31a052-de8a-4db1-8b0c-308790e7f533" (UID: "0c31a052-de8a-4db1-8b0c-308790e7f533"). InnerVolumeSpecName "kube-api-access-xhkf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.906810 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkf2\" (UniqueName: \"kubernetes.io/projected/0c31a052-de8a-4db1-8b0c-308790e7f533-kube-api-access-xhkf2\") on node \"crc\" DevicePath \"\"" Jan 26 20:23:34 crc kubenswrapper[4770]: I0126 20:23:34.992867 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c31a052-de8a-4db1-8b0c-308790e7f533-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0c31a052-de8a-4db1-8b0c-308790e7f533" (UID: "0c31a052-de8a-4db1-8b0c-308790e7f533"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:23:35 crc kubenswrapper[4770]: I0126 20:23:35.009102 4770 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0c31a052-de8a-4db1-8b0c-308790e7f533-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 20:23:35 crc kubenswrapper[4770]: I0126 20:23:35.627276 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ztzq5_must-gather-fbv7n_0c31a052-de8a-4db1-8b0c-308790e7f533/copy/0.log" Jan 26 20:23:35 crc kubenswrapper[4770]: I0126 20:23:35.627595 4770 scope.go:117] "RemoveContainer" containerID="a79000ebfb695e74019ad15e5f8590c3883404116544af21410a74e9a948d750" Jan 26 20:23:35 crc kubenswrapper[4770]: I0126 20:23:35.627743 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ztzq5/must-gather-fbv7n" Jan 26 20:23:35 crc kubenswrapper[4770]: I0126 20:23:35.659119 4770 scope.go:117] "RemoveContainer" containerID="b38ca10a1d1d7a5e409f483320d336ff7d167c8adceb4be48d08cab67f6686d3" Jan 26 20:23:35 crc kubenswrapper[4770]: I0126 20:23:35.778721 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" path="/var/lib/kubelet/pods/0c31a052-de8a-4db1-8b0c-308790e7f533/volumes" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.613416 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bnvbf"] Jan 26 20:23:36 crc kubenswrapper[4770]: E0126 20:23:36.614227 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="extract-content" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614251 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="extract-content" Jan 26 20:23:36 crc 
kubenswrapper[4770]: E0126 20:23:36.614264 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="registry-server" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614273 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="registry-server" Jan 26 20:23:36 crc kubenswrapper[4770]: E0126 20:23:36.614287 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="extract-utilities" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614295 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="extract-utilities" Jan 26 20:23:36 crc kubenswrapper[4770]: E0126 20:23:36.614315 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerName="gather" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614324 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerName="gather" Jan 26 20:23:36 crc kubenswrapper[4770]: E0126 20:23:36.614344 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerName="copy" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614351 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerName="copy" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614609 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerName="gather" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614634 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e230f60-6312-49bc-a037-48d4b25b14bc" containerName="registry-server" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.614652 4770 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0c31a052-de8a-4db1-8b0c-308790e7f533" containerName="copy" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.616448 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.646747 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnvbf"] Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.743856 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-utilities\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.743930 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-catalog-content\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.743961 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s69td\" (UniqueName: \"kubernetes.io/projected/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-kube-api-access-s69td\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.845952 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s69td\" (UniqueName: 
\"kubernetes.io/projected/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-kube-api-access-s69td\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.846261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-utilities\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.846350 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-catalog-content\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.846672 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-utilities\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.846754 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-catalog-content\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.878799 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s69td\" (UniqueName: 
\"kubernetes.io/projected/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-kube-api-access-s69td\") pod \"community-operators-bnvbf\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:36 crc kubenswrapper[4770]: I0126 20:23:36.968009 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:37 crc kubenswrapper[4770]: I0126 20:23:37.461613 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnvbf"] Jan 26 20:23:37 crc kubenswrapper[4770]: I0126 20:23:37.659510 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnvbf" event={"ID":"e2e6cf9a-7e5f-4243-9091-5f2b87b79470","Type":"ContainerStarted","Data":"288100b3d58586206f20d199540ea00f7173ac85069c73b1d28aa377cf996ff4"} Jan 26 20:23:38 crc kubenswrapper[4770]: I0126 20:23:38.698191 4770 generic.go:334] "Generic (PLEG): container finished" podID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerID="5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3" exitCode=0 Jan 26 20:23:38 crc kubenswrapper[4770]: I0126 20:23:38.698489 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnvbf" event={"ID":"e2e6cf9a-7e5f-4243-9091-5f2b87b79470","Type":"ContainerDied","Data":"5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3"} Jan 26 20:23:39 crc kubenswrapper[4770]: I0126 20:23:39.712024 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnvbf" event={"ID":"e2e6cf9a-7e5f-4243-9091-5f2b87b79470","Type":"ContainerStarted","Data":"f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161"} Jan 26 20:23:40 crc kubenswrapper[4770]: I0126 20:23:40.731674 4770 generic.go:334] "Generic (PLEG): container finished" podID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" 
containerID="f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161" exitCode=0 Jan 26 20:23:40 crc kubenswrapper[4770]: I0126 20:23:40.732250 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnvbf" event={"ID":"e2e6cf9a-7e5f-4243-9091-5f2b87b79470","Type":"ContainerDied","Data":"f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161"} Jan 26 20:23:41 crc kubenswrapper[4770]: I0126 20:23:41.745407 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnvbf" event={"ID":"e2e6cf9a-7e5f-4243-9091-5f2b87b79470","Type":"ContainerStarted","Data":"0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da"} Jan 26 20:23:41 crc kubenswrapper[4770]: I0126 20:23:41.771631 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bnvbf" podStartSLOduration=3.2413746469999998 podStartE2EDuration="5.771602324s" podCreationTimestamp="2026-01-26 20:23:36 +0000 UTC" firstStartedPulling="2026-01-26 20:23:38.704210854 +0000 UTC m=+6103.269117596" lastFinishedPulling="2026-01-26 20:23:41.234438501 +0000 UTC m=+6105.799345273" observedRunningTime="2026-01-26 20:23:41.762368616 +0000 UTC m=+6106.327275398" watchObservedRunningTime="2026-01-26 20:23:41.771602324 +0000 UTC m=+6106.336509106" Jan 26 20:23:46 crc kubenswrapper[4770]: I0126 20:23:46.968530 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:46 crc kubenswrapper[4770]: I0126 20:23:46.969249 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:47 crc kubenswrapper[4770]: I0126 20:23:47.043688 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:47 crc kubenswrapper[4770]: I0126 
20:23:47.874951 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:47 crc kubenswrapper[4770]: I0126 20:23:47.930175 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnvbf"] Jan 26 20:23:49 crc kubenswrapper[4770]: I0126 20:23:49.838684 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bnvbf" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="registry-server" containerID="cri-o://0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da" gracePeriod=2 Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.383822 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.465014 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-catalog-content\") pod \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.465990 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-utilities\") pod \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.466069 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s69td\" (UniqueName: \"kubernetes.io/projected/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-kube-api-access-s69td\") pod \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\" (UID: \"e2e6cf9a-7e5f-4243-9091-5f2b87b79470\") " Jan 26 20:23:50 crc kubenswrapper[4770]: 
I0126 20:23:50.467030 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-utilities" (OuterVolumeSpecName: "utilities") pod "e2e6cf9a-7e5f-4243-9091-5f2b87b79470" (UID: "e2e6cf9a-7e5f-4243-9091-5f2b87b79470"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.471207 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-kube-api-access-s69td" (OuterVolumeSpecName: "kube-api-access-s69td") pod "e2e6cf9a-7e5f-4243-9091-5f2b87b79470" (UID: "e2e6cf9a-7e5f-4243-9091-5f2b87b79470"). InnerVolumeSpecName "kube-api-access-s69td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.512559 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2e6cf9a-7e5f-4243-9091-5f2b87b79470" (UID: "e2e6cf9a-7e5f-4243-9091-5f2b87b79470"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.568489 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s69td\" (UniqueName: \"kubernetes.io/projected/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-kube-api-access-s69td\") on node \"crc\" DevicePath \"\"" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.568520 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.568529 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e6cf9a-7e5f-4243-9091-5f2b87b79470-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.851386 4770 generic.go:334] "Generic (PLEG): container finished" podID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerID="0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da" exitCode=0 Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.851433 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnvbf" event={"ID":"e2e6cf9a-7e5f-4243-9091-5f2b87b79470","Type":"ContainerDied","Data":"0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da"} Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.851494 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnvbf" event={"ID":"e2e6cf9a-7e5f-4243-9091-5f2b87b79470","Type":"ContainerDied","Data":"288100b3d58586206f20d199540ea00f7173ac85069c73b1d28aa377cf996ff4"} Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.851514 4770 scope.go:117] "RemoveContainer" containerID="0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 
20:23:50.851447 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnvbf" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.889375 4770 scope.go:117] "RemoveContainer" containerID="f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.898286 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnvbf"] Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.914786 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bnvbf"] Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.918945 4770 scope.go:117] "RemoveContainer" containerID="5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.986074 4770 scope.go:117] "RemoveContainer" containerID="0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da" Jan 26 20:23:50 crc kubenswrapper[4770]: E0126 20:23:50.986853 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da\": container with ID starting with 0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da not found: ID does not exist" containerID="0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.986906 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da"} err="failed to get container status \"0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da\": rpc error: code = NotFound desc = could not find container \"0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da\": container with ID starting with 
0a5d4afbecedf2b53891123651105433cfedda9a531753ab4c8030914f7109da not found: ID does not exist" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.986937 4770 scope.go:117] "RemoveContainer" containerID="f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161" Jan 26 20:23:50 crc kubenswrapper[4770]: E0126 20:23:50.987769 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161\": container with ID starting with f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161 not found: ID does not exist" containerID="f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.987871 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161"} err="failed to get container status \"f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161\": rpc error: code = NotFound desc = could not find container \"f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161\": container with ID starting with f16f528097620c8f2ed4df80b058af01af88bce23496a2ce512e451438355161 not found: ID does not exist" Jan 26 20:23:50 crc kubenswrapper[4770]: I0126 20:23:50.987950 4770 scope.go:117] "RemoveContainer" containerID="5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3" Jan 26 20:23:50 crc kubenswrapper[4770]: E0126 20:23:50.989120 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3\": container with ID starting with 5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3 not found: ID does not exist" containerID="5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3" Jan 26 20:23:50 crc 
kubenswrapper[4770]: I0126 20:23:50.989144 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3"} err="failed to get container status \"5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3\": rpc error: code = NotFound desc = could not find container \"5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3\": container with ID starting with 5a33f7b1439bcb748e5c925df734b8c2daa65a5c5409d6c2b4a6086a6c9f9dd3 not found: ID does not exist" Jan 26 20:23:51 crc kubenswrapper[4770]: E0126 20:23:51.080104 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e6cf9a_7e5f_4243_9091_5f2b87b79470.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e6cf9a_7e5f_4243_9091_5f2b87b79470.slice/crio-288100b3d58586206f20d199540ea00f7173ac85069c73b1d28aa377cf996ff4\": RecentStats: unable to find data in memory cache]" Jan 26 20:23:51 crc kubenswrapper[4770]: I0126 20:23:51.785718 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" path="/var/lib/kubelet/pods/e2e6cf9a-7e5f-4243-9091-5f2b87b79470/volumes" Jan 26 20:24:22 crc kubenswrapper[4770]: I0126 20:24:22.724102 4770 scope.go:117] "RemoveContainer" containerID="e5af79f7bf30486bad4f3964c997fb035b93948d83067c69f654ff50bfdc2311" Jan 26 20:24:30 crc kubenswrapper[4770]: I0126 20:24:30.331157 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:24:30 crc kubenswrapper[4770]: I0126 20:24:30.331835 4770 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:25:00 crc kubenswrapper[4770]: I0126 20:25:00.331240 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:25:00 crc kubenswrapper[4770]: I0126 20:25:00.331964 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:25:30 crc kubenswrapper[4770]: I0126 20:25:30.330262 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:25:30 crc kubenswrapper[4770]: I0126 20:25:30.330744 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:25:30 crc kubenswrapper[4770]: I0126 20:25:30.330791 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 20:25:30 crc kubenswrapper[4770]: I0126 20:25:30.331350 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9c074462de267fbf12204a8ff74942f30b5615b58c47e76522dde64fbd6be4e"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:25:30 crc kubenswrapper[4770]: I0126 20:25:30.331418 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://c9c074462de267fbf12204a8ff74942f30b5615b58c47e76522dde64fbd6be4e" gracePeriod=600 Jan 26 20:25:31 crc kubenswrapper[4770]: I0126 20:25:31.098002 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerID="c9c074462de267fbf12204a8ff74942f30b5615b58c47e76522dde64fbd6be4e" exitCode=0 Jan 26 20:25:31 crc kubenswrapper[4770]: I0126 20:25:31.098068 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"c9c074462de267fbf12204a8ff74942f30b5615b58c47e76522dde64fbd6be4e"} Jan 26 20:25:31 crc kubenswrapper[4770]: I0126 20:25:31.098461 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8"} Jan 26 20:25:31 crc kubenswrapper[4770]: I0126 20:25:31.098482 4770 scope.go:117] "RemoveContainer" 
containerID="6cac80879ce27e87a0167a35e1995cba5c2477fc200b4c8b73e1568f49819f00" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.508266 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qtvwf/must-gather-4qzdd"] Jan 26 20:26:37 crc kubenswrapper[4770]: E0126 20:26:37.509388 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="registry-server" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.509406 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="registry-server" Jan 26 20:26:37 crc kubenswrapper[4770]: E0126 20:26:37.509443 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="extract-content" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.509451 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="extract-content" Jan 26 20:26:37 crc kubenswrapper[4770]: E0126 20:26:37.509475 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="extract-utilities" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.509489 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="extract-utilities" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.509765 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e6cf9a-7e5f-4243-9091-5f2b87b79470" containerName="registry-server" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.511107 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.513731 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qtvwf"/"openshift-service-ca.crt" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.513862 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qtvwf"/"kube-root-ca.crt" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.529253 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qtvwf/must-gather-4qzdd"] Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.658077 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a37145bb-62d6-4394-abcd-6b1bce3d038c-must-gather-output\") pod \"must-gather-4qzdd\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.658116 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzxm\" (UniqueName: \"kubernetes.io/projected/a37145bb-62d6-4394-abcd-6b1bce3d038c-kube-api-access-rlzxm\") pod \"must-gather-4qzdd\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.760769 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlzxm\" (UniqueName: \"kubernetes.io/projected/a37145bb-62d6-4394-abcd-6b1bce3d038c-kube-api-access-rlzxm\") pod \"must-gather-4qzdd\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.761113 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a37145bb-62d6-4394-abcd-6b1bce3d038c-must-gather-output\") pod \"must-gather-4qzdd\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.761600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a37145bb-62d6-4394-abcd-6b1bce3d038c-must-gather-output\") pod \"must-gather-4qzdd\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.782164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlzxm\" (UniqueName: \"kubernetes.io/projected/a37145bb-62d6-4394-abcd-6b1bce3d038c-kube-api-access-rlzxm\") pod \"must-gather-4qzdd\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:37 crc kubenswrapper[4770]: I0126 20:26:37.836460 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:26:38 crc kubenswrapper[4770]: I0126 20:26:38.353475 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qtvwf/must-gather-4qzdd"] Jan 26 20:26:38 crc kubenswrapper[4770]: I0126 20:26:38.895410 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" event={"ID":"a37145bb-62d6-4394-abcd-6b1bce3d038c","Type":"ContainerStarted","Data":"fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"} Jan 26 20:26:38 crc kubenswrapper[4770]: I0126 20:26:38.896719 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" event={"ID":"a37145bb-62d6-4394-abcd-6b1bce3d038c","Type":"ContainerStarted","Data":"d2159f696f174d6123097565bea2594ac47550a2766d93468dd122bacb7a2750"} Jan 26 20:26:39 crc kubenswrapper[4770]: I0126 20:26:39.911832 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" event={"ID":"a37145bb-62d6-4394-abcd-6b1bce3d038c","Type":"ContainerStarted","Data":"5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9"} Jan 26 20:26:39 crc kubenswrapper[4770]: I0126 20:26:39.948614 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" podStartSLOduration=2.948585683 podStartE2EDuration="2.948585683s" podCreationTimestamp="2026-01-26 20:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:26:39.936301313 +0000 UTC m=+6284.501208045" watchObservedRunningTime="2026-01-26 20:26:39.948585683 +0000 UTC m=+6284.513492455" Jan 26 20:26:42 crc kubenswrapper[4770]: I0126 20:26:42.777620 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-d567d"] Jan 26 20:26:42 crc kubenswrapper[4770]: 
I0126 20:26:42.780389 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:42 crc kubenswrapper[4770]: I0126 20:26:42.784448 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qtvwf"/"default-dockercfg-qqzfh" Jan 26 20:26:42 crc kubenswrapper[4770]: I0126 20:26:42.883329 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-host\") pod \"crc-debug-d567d\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:42 crc kubenswrapper[4770]: I0126 20:26:42.883691 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxw2\" (UniqueName: \"kubernetes.io/projected/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-kube-api-access-wxxw2\") pod \"crc-debug-d567d\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:42 crc kubenswrapper[4770]: I0126 20:26:42.987479 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-host\") pod \"crc-debug-d567d\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:42 crc kubenswrapper[4770]: I0126 20:26:42.987800 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxw2\" (UniqueName: \"kubernetes.io/projected/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-kube-api-access-wxxw2\") pod \"crc-debug-d567d\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:42 crc kubenswrapper[4770]: I0126 20:26:42.988440 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-host\") pod \"crc-debug-d567d\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:43 crc kubenswrapper[4770]: I0126 20:26:43.022343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxw2\" (UniqueName: \"kubernetes.io/projected/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-kube-api-access-wxxw2\") pod \"crc-debug-d567d\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:43 crc kubenswrapper[4770]: I0126 20:26:43.105923 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:26:43 crc kubenswrapper[4770]: I0126 20:26:43.945045 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-d567d" event={"ID":"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5","Type":"ContainerStarted","Data":"78ff78957b3e194578fa89f592f38c68c597028ce82fed50b233b7cd384e27db"} Jan 26 20:26:43 crc kubenswrapper[4770]: I0126 20:26:43.945518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-d567d" event={"ID":"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5","Type":"ContainerStarted","Data":"dae138fc06c5a2e13f7b8727bacf58216c9467c74ea1c6521023792122db2d9b"} Jan 26 20:26:43 crc kubenswrapper[4770]: I0126 20:26:43.965164 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qtvwf/crc-debug-d567d" podStartSLOduration=1.96514621 podStartE2EDuration="1.96514621s" podCreationTimestamp="2026-01-26 20:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:26:43.959346584 +0000 UTC m=+6288.524253316" watchObservedRunningTime="2026-01-26 20:26:43.96514621 
+0000 UTC m=+6288.530052932" Jan 26 20:27:22 crc kubenswrapper[4770]: I0126 20:27:22.306560 4770 generic.go:334] "Generic (PLEG): container finished" podID="c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5" containerID="78ff78957b3e194578fa89f592f38c68c597028ce82fed50b233b7cd384e27db" exitCode=0 Jan 26 20:27:22 crc kubenswrapper[4770]: I0126 20:27:22.306650 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-d567d" event={"ID":"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5","Type":"ContainerDied","Data":"78ff78957b3e194578fa89f592f38c68c597028ce82fed50b233b7cd384e27db"} Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.447876 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.506992 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-d567d"] Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.522362 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-d567d"] Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.577379 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxxw2\" (UniqueName: \"kubernetes.io/projected/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-kube-api-access-wxxw2\") pod \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.577682 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-host\") pod \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\" (UID: \"c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5\") " Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.578111 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-host" (OuterVolumeSpecName: "host") pod "c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5" (UID: "c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.588941 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-kube-api-access-wxxw2" (OuterVolumeSpecName: "kube-api-access-wxxw2") pod "c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5" (UID: "c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5"). InnerVolumeSpecName "kube-api-access-wxxw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.679686 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxxw2\" (UniqueName: \"kubernetes.io/projected/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-kube-api-access-wxxw2\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.679784 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:23 crc kubenswrapper[4770]: I0126 20:27:23.779564 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5" path="/var/lib/kubelet/pods/c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5/volumes" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.334664 4770 scope.go:117] "RemoveContainer" containerID="78ff78957b3e194578fa89f592f38c68c597028ce82fed50b233b7cd384e27db" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.334788 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-d567d" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.652331 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-94jmb"] Jan 26 20:27:24 crc kubenswrapper[4770]: E0126 20:27:24.653175 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5" containerName="container-00" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.653192 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5" containerName="container-00" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.653393 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cbe238-aba0-4c5f-bc7e-4f16dc7c48f5" containerName="container-00" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.654191 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.656129 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qtvwf"/"default-dockercfg-qqzfh" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.807017 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cbvv\" (UniqueName: \"kubernetes.io/projected/6f13602f-cffd-4999-b563-3fc1d4d2f311-kube-api-access-4cbvv\") pod \"crc-debug-94jmb\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.807108 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f13602f-cffd-4999-b563-3fc1d4d2f311-host\") pod \"crc-debug-94jmb\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " 
pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.909293 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cbvv\" (UniqueName: \"kubernetes.io/projected/6f13602f-cffd-4999-b563-3fc1d4d2f311-kube-api-access-4cbvv\") pod \"crc-debug-94jmb\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.909404 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f13602f-cffd-4999-b563-3fc1d4d2f311-host\") pod \"crc-debug-94jmb\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.910795 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f13602f-cffd-4999-b563-3fc1d4d2f311-host\") pod \"crc-debug-94jmb\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.940379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cbvv\" (UniqueName: \"kubernetes.io/projected/6f13602f-cffd-4999-b563-3fc1d4d2f311-kube-api-access-4cbvv\") pod \"crc-debug-94jmb\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:24 crc kubenswrapper[4770]: I0126 20:27:24.968234 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:25 crc kubenswrapper[4770]: I0126 20:27:25.345104 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" event={"ID":"6f13602f-cffd-4999-b563-3fc1d4d2f311","Type":"ContainerStarted","Data":"547ca86dc13643c1eb087e3f9c64dd3671238fbc880907c394b83f1a6430c27f"} Jan 26 20:27:25 crc kubenswrapper[4770]: I0126 20:27:25.345530 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" event={"ID":"6f13602f-cffd-4999-b563-3fc1d4d2f311","Type":"ContainerStarted","Data":"b0335f97da0b96a67825820e20698465b559a87ac3c61ed39ef800065c45d845"} Jan 26 20:27:25 crc kubenswrapper[4770]: I0126 20:27:25.370915 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" podStartSLOduration=1.370889848 podStartE2EDuration="1.370889848s" podCreationTimestamp="2026-01-26 20:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 20:27:25.35873026 +0000 UTC m=+6329.923637002" watchObservedRunningTime="2026-01-26 20:27:25.370889848 +0000 UTC m=+6329.935796590" Jan 26 20:27:26 crc kubenswrapper[4770]: I0126 20:27:26.353743 4770 generic.go:334] "Generic (PLEG): container finished" podID="6f13602f-cffd-4999-b563-3fc1d4d2f311" containerID="547ca86dc13643c1eb087e3f9c64dd3671238fbc880907c394b83f1a6430c27f" exitCode=0 Jan 26 20:27:26 crc kubenswrapper[4770]: I0126 20:27:26.353804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" event={"ID":"6f13602f-cffd-4999-b563-3fc1d4d2f311","Type":"ContainerDied","Data":"547ca86dc13643c1eb087e3f9c64dd3671238fbc880907c394b83f1a6430c27f"} Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.486609 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.539047 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-94jmb"] Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.550350 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-94jmb"] Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.667999 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f13602f-cffd-4999-b563-3fc1d4d2f311-host\") pod \"6f13602f-cffd-4999-b563-3fc1d4d2f311\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.668053 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cbvv\" (UniqueName: \"kubernetes.io/projected/6f13602f-cffd-4999-b563-3fc1d4d2f311-kube-api-access-4cbvv\") pod \"6f13602f-cffd-4999-b563-3fc1d4d2f311\" (UID: \"6f13602f-cffd-4999-b563-3fc1d4d2f311\") " Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.668141 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f13602f-cffd-4999-b563-3fc1d4d2f311-host" (OuterVolumeSpecName: "host") pod "6f13602f-cffd-4999-b563-3fc1d4d2f311" (UID: "6f13602f-cffd-4999-b563-3fc1d4d2f311"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.668607 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f13602f-cffd-4999-b563-3fc1d4d2f311-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.682884 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f13602f-cffd-4999-b563-3fc1d4d2f311-kube-api-access-4cbvv" (OuterVolumeSpecName: "kube-api-access-4cbvv") pod "6f13602f-cffd-4999-b563-3fc1d4d2f311" (UID: "6f13602f-cffd-4999-b563-3fc1d4d2f311"). InnerVolumeSpecName "kube-api-access-4cbvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.769664 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cbvv\" (UniqueName: \"kubernetes.io/projected/6f13602f-cffd-4999-b563-3fc1d4d2f311-kube-api-access-4cbvv\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:27 crc kubenswrapper[4770]: I0126 20:27:27.778375 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f13602f-cffd-4999-b563-3fc1d4d2f311" path="/var/lib/kubelet/pods/6f13602f-cffd-4999-b563-3fc1d4d2f311/volumes" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.380584 4770 scope.go:117] "RemoveContainer" containerID="547ca86dc13643c1eb087e3f9c64dd3671238fbc880907c394b83f1a6430c27f" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.380645 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-94jmb" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.708328 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-nh45c"] Jan 26 20:27:28 crc kubenswrapper[4770]: E0126 20:27:28.708781 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f13602f-cffd-4999-b563-3fc1d4d2f311" containerName="container-00" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.708793 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f13602f-cffd-4999-b563-3fc1d4d2f311" containerName="container-00" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.708970 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f13602f-cffd-4999-b563-3fc1d4d2f311" containerName="container-00" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.709644 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.711638 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qtvwf"/"default-dockercfg-qqzfh" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.892612 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49d135b0-4869-414e-a83c-0cc5ba31bf81-host\") pod \"crc-debug-nh45c\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.893045 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9xp\" (UniqueName: \"kubernetes.io/projected/49d135b0-4869-414e-a83c-0cc5ba31bf81-kube-api-access-zf9xp\") pod \"crc-debug-nh45c\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " 
pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.995012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49d135b0-4869-414e-a83c-0cc5ba31bf81-host\") pod \"crc-debug-nh45c\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.995079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9xp\" (UniqueName: \"kubernetes.io/projected/49d135b0-4869-414e-a83c-0cc5ba31bf81-kube-api-access-zf9xp\") pod \"crc-debug-nh45c\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:28 crc kubenswrapper[4770]: I0126 20:27:28.995146 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49d135b0-4869-414e-a83c-0cc5ba31bf81-host\") pod \"crc-debug-nh45c\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:29 crc kubenswrapper[4770]: I0126 20:27:29.018354 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9xp\" (UniqueName: \"kubernetes.io/projected/49d135b0-4869-414e-a83c-0cc5ba31bf81-kube-api-access-zf9xp\") pod \"crc-debug-nh45c\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:29 crc kubenswrapper[4770]: I0126 20:27:29.039511 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:29 crc kubenswrapper[4770]: W0126 20:27:29.079811 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49d135b0_4869_414e_a83c_0cc5ba31bf81.slice/crio-d560cbd5cb540ae8587f26941b0eb011ce23dfc115cbf682a4a7b3825f700d6b WatchSource:0}: Error finding container d560cbd5cb540ae8587f26941b0eb011ce23dfc115cbf682a4a7b3825f700d6b: Status 404 returned error can't find the container with id d560cbd5cb540ae8587f26941b0eb011ce23dfc115cbf682a4a7b3825f700d6b Jan 26 20:27:29 crc kubenswrapper[4770]: I0126 20:27:29.391827 4770 generic.go:334] "Generic (PLEG): container finished" podID="49d135b0-4869-414e-a83c-0cc5ba31bf81" containerID="09381617de2f9752d727a57edcde7a0ae23dadd06c0dd5a7c87f0118d68a6bb3" exitCode=0 Jan 26 20:27:29 crc kubenswrapper[4770]: I0126 20:27:29.392004 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-nh45c" event={"ID":"49d135b0-4869-414e-a83c-0cc5ba31bf81","Type":"ContainerDied","Data":"09381617de2f9752d727a57edcde7a0ae23dadd06c0dd5a7c87f0118d68a6bb3"} Jan 26 20:27:29 crc kubenswrapper[4770]: I0126 20:27:29.392147 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/crc-debug-nh45c" event={"ID":"49d135b0-4869-414e-a83c-0cc5ba31bf81","Type":"ContainerStarted","Data":"d560cbd5cb540ae8587f26941b0eb011ce23dfc115cbf682a4a7b3825f700d6b"} Jan 26 20:27:29 crc kubenswrapper[4770]: I0126 20:27:29.432008 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-nh45c"] Jan 26 20:27:29 crc kubenswrapper[4770]: I0126 20:27:29.439385 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qtvwf/crc-debug-nh45c"] Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.330545 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.330604 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.513797 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.624754 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9xp\" (UniqueName: \"kubernetes.io/projected/49d135b0-4869-414e-a83c-0cc5ba31bf81-kube-api-access-zf9xp\") pod \"49d135b0-4869-414e-a83c-0cc5ba31bf81\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.624842 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49d135b0-4869-414e-a83c-0cc5ba31bf81-host\") pod \"49d135b0-4869-414e-a83c-0cc5ba31bf81\" (UID: \"49d135b0-4869-414e-a83c-0cc5ba31bf81\") " Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.625182 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49d135b0-4869-414e-a83c-0cc5ba31bf81-host" (OuterVolumeSpecName: "host") pod "49d135b0-4869-414e-a83c-0cc5ba31bf81" (UID: "49d135b0-4869-414e-a83c-0cc5ba31bf81"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.625527 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49d135b0-4869-414e-a83c-0cc5ba31bf81-host\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.630982 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d135b0-4869-414e-a83c-0cc5ba31bf81-kube-api-access-zf9xp" (OuterVolumeSpecName: "kube-api-access-zf9xp") pod "49d135b0-4869-414e-a83c-0cc5ba31bf81" (UID: "49d135b0-4869-414e-a83c-0cc5ba31bf81"). InnerVolumeSpecName "kube-api-access-zf9xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:27:30 crc kubenswrapper[4770]: I0126 20:27:30.728059 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9xp\" (UniqueName: \"kubernetes.io/projected/49d135b0-4869-414e-a83c-0cc5ba31bf81-kube-api-access-zf9xp\") on node \"crc\" DevicePath \"\"" Jan 26 20:27:31 crc kubenswrapper[4770]: I0126 20:27:31.422642 4770 scope.go:117] "RemoveContainer" containerID="09381617de2f9752d727a57edcde7a0ae23dadd06c0dd5a7c87f0118d68a6bb3" Jan 26 20:27:31 crc kubenswrapper[4770]: I0126 20:27:31.422815 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/crc-debug-nh45c" Jan 26 20:27:31 crc kubenswrapper[4770]: I0126 20:27:31.784381 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d135b0-4869-414e-a83c-0cc5ba31bf81" path="/var/lib/kubelet/pods/49d135b0-4869-414e-a83c-0cc5ba31bf81/volumes" Jan 26 20:28:00 crc kubenswrapper[4770]: I0126 20:28:00.330212 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:28:00 crc kubenswrapper[4770]: I0126 20:28:00.330804 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:28:17 crc kubenswrapper[4770]: I0126 20:28:17.615613 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-545575dfd-bbtbf_46ff829b-eabe-4d50-a22f-4da3d6cf798f/barbican-api/0.log" Jan 26 20:28:17 crc kubenswrapper[4770]: I0126 20:28:17.729771 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-545575dfd-bbtbf_46ff829b-eabe-4d50-a22f-4da3d6cf798f/barbican-api-log/0.log" Jan 26 20:28:17 crc kubenswrapper[4770]: I0126 20:28:17.798627 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-647c797856-n9jkj_e3807ac3-64e8-4132-8b60-59d034d69c52/barbican-keystone-listener/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.005545 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-647c797856-n9jkj_e3807ac3-64e8-4132-8b60-59d034d69c52/barbican-keystone-listener-log/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.063609 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f49cc977f-jfnpn_5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd/barbican-worker/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.097797 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6f49cc977f-jfnpn_5aef95c5-2dc6-49e0-b2fa-b33b501c9bdd/barbican-worker-log/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.359390 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v8wrd_57d4869c-fa1d-45c4-b9a6-a49c5e9a25e5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.451763 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/ceilometer-central-agent/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.533647 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/ceilometer-notification-agent/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.588488 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/proxy-httpd/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.613659 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3f809a30-5737-424e-b40c-5058d98726e4/sg-core/0.log" Jan 26 20:28:18 crc kubenswrapper[4770]: I0126 20:28:18.838401 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cc30e4a5-148d-4296-b220-518e972b4f3b/cinder-api-log/0.log" Jan 26 20:28:19 crc kubenswrapper[4770]: I0126 
20:28:19.220220 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0c995c7e-a30a-4482-98f4-1b88979f2702/probe/0.log" Jan 26 20:28:19 crc kubenswrapper[4770]: I0126 20:28:19.390618 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_0c995c7e-a30a-4482-98f4-1b88979f2702/cinder-backup/0.log" Jan 26 20:28:19 crc kubenswrapper[4770]: I0126 20:28:19.498363 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bf3cbbc4-d990-4d7d-9514-28beda8c084e/cinder-scheduler/0.log" Jan 26 20:28:19 crc kubenswrapper[4770]: I0126 20:28:19.519805 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bf3cbbc4-d990-4d7d-9514-28beda8c084e/probe/0.log" Jan 26 20:28:19 crc kubenswrapper[4770]: I0126 20:28:19.538960 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cc30e4a5-148d-4296-b220-518e972b4f3b/cinder-api/0.log" Jan 26 20:28:19 crc kubenswrapper[4770]: I0126 20:28:19.724265 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_098c14a9-04f1-4bba-8770-cb3ba0add71e/probe/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.020772 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_098c14a9-04f1-4bba-8770-cb3ba0add71e/cinder-volume/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.044202 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c5b4494e-e5fd-4561-8b35-9993d10cbe6b/probe/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.071548 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c5b4494e-e5fd-4561-8b35-9993d10cbe6b/cinder-volume/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.256578 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6lb2d_f64f037e-f80f-4f8d-be06-9917ac988deb/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.270373 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-t8bfl_f2cab92c-6548-4bab-82d8-f9cc534b88a8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.447000 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7cbc8554f7-j54ps_5bf0f8d0-3821-4f2d-98d5-eeb869043350/init/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.633440 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7cbc8554f7-j54ps_5bf0f8d0-3821-4f2d-98d5-eeb869043350/init/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.727442 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kgk8k_f9cfc064-c4a3-42cf-8193-9090da67b4db/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.868097 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7cbc8554f7-j54ps_5bf0f8d0-3821-4f2d-98d5-eeb869043350/dnsmasq-dns/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.946092 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c98b34c0-4fc9-4b79-b664-bbc8ddb787a1/glance-log/0.log" Jan 26 20:28:20 crc kubenswrapper[4770]: I0126 20:28:20.950949 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c98b34c0-4fc9-4b79-b664-bbc8ddb787a1/glance-httpd/0.log" Jan 26 20:28:21 crc kubenswrapper[4770]: I0126 20:28:21.114336 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_93320a1f-7ced-4765-95a5-918a8fa2de1c/glance-httpd/0.log" Jan 26 20:28:21 crc kubenswrapper[4770]: I0126 20:28:21.210256 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_93320a1f-7ced-4765-95a5-918a8fa2de1c/glance-log/0.log" Jan 26 20:28:21 crc kubenswrapper[4770]: I0126 20:28:21.308432 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77b47dc986-cqqn6_65b445e3-2f98-4b3d-9290-4e7eff894ef0/horizon/0.log" Jan 26 20:28:21 crc kubenswrapper[4770]: I0126 20:28:21.651395 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fwmzr_514407e1-deb8-4ac4-bf0e-9b93842cb8f9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:21 crc kubenswrapper[4770]: I0126 20:28:21.941915 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zd9mk_51afe695-3612-4c67-8f8f-d7cf1c927b20/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:21 crc kubenswrapper[4770]: I0126 20:28:21.951037 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490961-npqvs_f567e5e2-7857-417c-8258-63661d995e06/keystone-cron/0.log" Jan 26 20:28:22 crc kubenswrapper[4770]: I0126 20:28:22.245306 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6994181f-05b0-468c-911a-4f910e017419/kube-state-metrics/0.log" Jan 26 20:28:22 crc kubenswrapper[4770]: I0126 20:28:22.273870 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77b47dc986-cqqn6_65b445e3-2f98-4b3d-9290-4e7eff894ef0/horizon-log/0.log" Jan 26 20:28:22 crc kubenswrapper[4770]: I0126 20:28:22.471219 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-f4zzp_372fe502-3240-4adc-b60d-ae93c8a37430/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:22 crc kubenswrapper[4770]: I0126 20:28:22.681529 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-dfccf5f44-hghd8_d119257f-62e4-4f5b-8c56-3bd82b5b6041/keystone-api/0.log" Jan 26 20:28:22 crc kubenswrapper[4770]: I0126 20:28:22.984676 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-t5wcg_5c761917-b83c-4c4b-8aff-79848506a7cd/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:23 crc kubenswrapper[4770]: I0126 20:28:23.086044 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c5fff9c7-vsc8j_061a1ade-3e2c-4fa3-af1d-79119e42b777/neutron-api/0.log" Jan 26 20:28:23 crc kubenswrapper[4770]: I0126 20:28:23.090102 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c5fff9c7-vsc8j_061a1ade-3e2c-4fa3-af1d-79119e42b777/neutron-httpd/0.log" Jan 26 20:28:23 crc kubenswrapper[4770]: I0126 20:28:23.800890 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7b7b00b0-e2fe-4012-8d42-ed69e1345f94/nova-cell0-conductor-conductor/0.log" Jan 26 20:28:24 crc kubenswrapper[4770]: I0126 20:28:24.068142 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6a5bd373-f3aa-42ca-8360-32e1de10c999/nova-cell1-conductor-conductor/0.log" Jan 26 20:28:24 crc kubenswrapper[4770]: I0126 20:28:24.502214 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fcf4af4b-a734-40c3-be45-ca0dd2a43124/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 20:28:24 crc kubenswrapper[4770]: I0126 20:28:24.553065 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-q9qgt_c54172aa-4886-49b2-8834-ea8e8c57306e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:24 crc kubenswrapper[4770]: I0126 20:28:24.640509 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8af7a04a-7f7c-4e64-ab2d-40bb252db6ae/nova-api-log/0.log" Jan 26 20:28:24 crc kubenswrapper[4770]: I0126 20:28:24.880850 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7781b437-1736-47ca-b461-7fc8359ef733/nova-metadata-log/0.log" Jan 26 20:28:25 crc kubenswrapper[4770]: I0126 20:28:25.364885 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b/mysql-bootstrap/0.log" Jan 26 20:28:25 crc kubenswrapper[4770]: I0126 20:28:25.483756 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_76beecce-14cb-4546-9054-5b8bdd4293d9/nova-scheduler-scheduler/0.log" Jan 26 20:28:25 crc kubenswrapper[4770]: I0126 20:28:25.490320 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8af7a04a-7f7c-4e64-ab2d-40bb252db6ae/nova-api-api/0.log" Jan 26 20:28:25 crc kubenswrapper[4770]: I0126 20:28:25.547234 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b/mysql-bootstrap/0.log" Jan 26 20:28:25 crc kubenswrapper[4770]: I0126 20:28:25.770665 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e620ef2b-6951-4c91-8517-c35e07ee8a2a/mysql-bootstrap/0.log" Jan 26 20:28:25 crc kubenswrapper[4770]: I0126 20:28:25.825263 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5f0d9b85-2fd6-4bb3-afd4-48a7f6c8b47b/galera/0.log" Jan 26 20:28:25 crc kubenswrapper[4770]: I0126 20:28:25.990091 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_e620ef2b-6951-4c91-8517-c35e07ee8a2a/mysql-bootstrap/0.log" Jan 26 20:28:26 crc kubenswrapper[4770]: I0126 20:28:26.008737 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e620ef2b-6951-4c91-8517-c35e07ee8a2a/galera/0.log" Jan 26 20:28:26 crc kubenswrapper[4770]: I0126 20:28:26.217100 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_db423aff-dffd-46a6-bd83-765c623ab77c/openstackclient/0.log" Jan 26 20:28:26 crc kubenswrapper[4770]: I0126 20:28:26.288467 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hgfvf_9d2095b9-c866-4424-aa95-31718bd65d61/ovn-controller/0.log" Jan 26 20:28:26 crc kubenswrapper[4770]: I0126 20:28:26.538270 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t59c4_d9ff64ab-79f6-4941-8de7-b9edbea8439d/openstack-network-exporter/0.log" Jan 26 20:28:26 crc kubenswrapper[4770]: I0126 20:28:26.659382 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovsdb-server-init/0.log" Jan 26 20:28:26 crc kubenswrapper[4770]: I0126 20:28:26.855149 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovsdb-server-init/0.log" Jan 26 20:28:26 crc kubenswrapper[4770]: I0126 20:28:26.917202 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovsdb-server/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.173360 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-hk2v6_483f1a9a-7983-4628-bc2e-ab37a776dcf6/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.344355 4770 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_49994115-56ea-46a6-a7ae-bff2b9751bc8/openstack-network-exporter/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.344462 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dtdfk_48d5e8ce-0771-4ca8-9879-6ba39cd217a4/ovs-vswitchd/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.461180 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7781b437-1736-47ca-b461-7fc8359ef733/nova-metadata-metadata/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.502622 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_49994115-56ea-46a6-a7ae-bff2b9751bc8/ovn-northd/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.594606 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b42faa6-0359-44d0-96ea-7264ab250ba4/openstack-network-exporter/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.734030 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b42faa6-0359-44d0-96ea-7264ab250ba4/ovsdbserver-nb/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.755150 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_23527c1a-fd08-4cc7-a6b7-48fe3988ac6e/openstack-network-exporter/0.log" Jan 26 20:28:27 crc kubenswrapper[4770]: I0126 20:28:27.847619 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_23527c1a-fd08-4cc7-a6b7-48fe3988ac6e/ovsdbserver-sb/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.165023 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dfdbdd84d-x7fsz_a884b73b-0f60-4327-a836-b9c20f70b6e6/placement-api/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.234129 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/init-config-reloader/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.323203 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5dfdbdd84d-x7fsz_a884b73b-0f60-4327-a836-b9c20f70b6e6/placement-log/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.421240 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/init-config-reloader/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.449501 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/config-reloader/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.453211 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/prometheus/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.538811 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_caa91c00-9169-4445-af73-064cb3a08a3a/thanos-sidecar/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.678597 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_276b57ae-3637-49f3-a25c-9e8d7fc369ba/setup-container/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.866897 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_276b57ae-3637-49f3-a25c-9e8d7fc369ba/rabbitmq/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.891308 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_276b57ae-3637-49f3-a25c-9e8d7fc369ba/setup-container/0.log" Jan 26 20:28:28 crc kubenswrapper[4770]: I0126 20:28:28.989609 4770 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7e3d608a-c9d7-4a29-b45a-0c175851fdbc/setup-container/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.229832 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7e3d608a-c9d7-4a29-b45a-0c175851fdbc/setup-container/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.240768 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_7e3d608a-c9d7-4a29-b45a-0c175851fdbc/rabbitmq/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.272793 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_22b25319-9d84-42f2-b5ed-127c06f29bbb/setup-container/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.422676 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_22b25319-9d84-42f2-b5ed-127c06f29bbb/setup-container/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.478873 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-twx5j_4fdf356a-1a71-4b6f-92aa-c2c3a963f28e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.490271 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_22b25319-9d84-42f2-b5ed-127c06f29bbb/rabbitmq/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.646013 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-grrpm_dbfc185f-efba-4b46-b49a-0045340ae3cc/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.739233 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kkl9k_4ae332ee-80e2-4c02-a235-a318900f5ab4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:29 crc kubenswrapper[4770]: I0126 20:28:29.885745 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6962s_01d59985-d42f-42a7-9af0-01420a06b702/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.003000 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vm5mv_5f5964e6-f0a0-459a-a754-dcefc5a6ee69/ssh-known-hosts-edpm-deployment/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.206320 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8688c56555-rsnrn_65d3af51-41f4-40e5-949e-a3eb611043bb/proxy-server/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.330270 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.330323 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.330363 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.331257 4770 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8"} pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.331307 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" containerID="cri-o://622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" gracePeriod=600 Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.399949 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8688c56555-rsnrn_65d3af51-41f4-40e5-949e-a3eb611043bb/proxy-httpd/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.402013 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vx59z_ceb06b58-7f92-4704-909b-3c591476f04c/swift-ring-rebalance/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.442124 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-auditor/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: E0126 20:28:30.472684 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.635814 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-replicator/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.641899 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-reaper/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.671853 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-auditor/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.683282 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/account-server/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.813679 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-server/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.902139 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-updater/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.913830 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-auditor/0.log" Jan 26 20:28:30 crc kubenswrapper[4770]: I0126 20:28:30.921182 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/container-replicator/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.001079 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-expirer/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.028008 4770 generic.go:334] "Generic (PLEG): container finished" podID="6109a686-3ab2-465e-8a96-354f2ecbf491" 
containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" exitCode=0 Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.028254 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerDied","Data":"622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8"} Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.028343 4770 scope.go:117] "RemoveContainer" containerID="c9c074462de267fbf12204a8ff74942f30b5615b58c47e76522dde64fbd6be4e" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.029176 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:28:31 crc kubenswrapper[4770]: E0126 20:28:31.029506 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.171458 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-server/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.215541 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-replicator/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.234466 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/object-updater/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.255863 4770 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/rsync/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.375572 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_f3117c9b-d620-4686-afa7-315bbae0e328/swift-recon-cron/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.466974 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-m8gs8_50064c0b-e5a3-46a3-9053-536fcbe380a3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.685931 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b864a6fc-56ae-4c06-ad45-4ca55e1afd91/tempest-tests-tempest-tests-runner/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.759011 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6d5f3552-6711-4496-a6c3-b15ee1664349/test-operator-logs-container/0.log" Jan 26 20:28:31 crc kubenswrapper[4770]: I0126 20:28:31.943872 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-td8t6_608d349d-127c-4f0b-9a56-0368dcd0e46f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 20:28:32 crc kubenswrapper[4770]: I0126 20:28:32.678590 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_e5e85df5-499b-4543-aab5-e1d3ce9d1473/watcher-applier/0.log" Jan 26 20:28:33 crc kubenswrapper[4770]: I0126 20:28:33.094113 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_fbe6a16b-f234-4dcc-800e-7eb6338cc264/watcher-api-log/0.log" Jan 26 20:28:36 crc kubenswrapper[4770]: I0126 20:28:36.099371 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_e9760499-8609-4691-b587-2265122f7af7/watcher-decision-engine/0.log" Jan 26 20:28:37 crc kubenswrapper[4770]: I0126 20:28:37.404507 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_fbe6a16b-f234-4dcc-800e-7eb6338cc264/watcher-api/0.log" Jan 26 20:28:37 crc kubenswrapper[4770]: I0126 20:28:37.903208 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_eacb7365-d724-4d52-96c8-edb12977e1f3/memcached/0.log" Jan 26 20:28:42 crc kubenswrapper[4770]: I0126 20:28:42.767479 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:28:42 crc kubenswrapper[4770]: E0126 20:28:42.768187 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:28:56 crc kubenswrapper[4770]: I0126 20:28:56.768015 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:28:56 crc kubenswrapper[4770]: E0126 20:28:56.769157 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.017817 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/util/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.215160 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/util/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.269159 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/pull/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.322710 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/pull/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.560298 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/extract/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.564875 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/pull/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.577590 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_77cbf23a6ce5080beeaaa144df2f779637bc52f9c8d3364c5572578e70ghvsv_647d65c7-b9da-4084-b0eb-8d0867785785/util/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.795035 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-x8m5l_1666ea4c-3865-4bc2-8741-29383616e875/manager/0.log" Jan 26 20:29:00 crc 
kubenswrapper[4770]: I0126 20:29:00.880579 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-g9nzc_dc15189d-c78f-475d-9a49-dac90d4d4fcb/manager/0.log" Jan 26 20:29:00 crc kubenswrapper[4770]: I0126 20:29:00.964282 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-gwg5f_7dfabc71-10aa-4337-a700-6dda2a4819d5/manager/0.log" Jan 26 20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.037690 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-h2zrp_99b8587f-51d1-4cb2-a0ab-e131c9135388/manager/0.log" Jan 26 20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.189340 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-g4brh_c6ed16ef-d3d9-47ba-aa86-3e3612a5cf6f/manager/0.log" Jan 26 20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.252376 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-zn9m9_cc595d5d-2f69-47a8-a63f-7b4abce23fdd/manager/0.log" Jan 26 20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.468354 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-jg69w_0e7b29c5-2473-488f-a8cf-57863472bd68/manager/0.log" Jan 26 20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.710723 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-2tv9j_462ae2ba-a49e-4eb3-9d7e-0a853412206f/manager/0.log" Jan 26 20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.724778 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-v9wk4_68c5aef7-2f00-4a28-8a25-6af0a5cd4013/manager/0.log" Jan 26 
20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.937659 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-58zsz_444d3be6-b12b-4473-abff-a5e5f35af270/manager/0.log" Jan 26 20:29:01 crc kubenswrapper[4770]: I0126 20:29:01.983831 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-nwm8n_7ac27e32-922a-4a46-9bb3-a3daa301dee7/manager/0.log" Jan 26 20:29:02 crc kubenswrapper[4770]: I0126 20:29:02.166337 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-4bpjq_d427e158-3f69-44b8-abe3-1510fb4fdd1e/manager/0.log" Jan 26 20:29:02 crc kubenswrapper[4770]: I0126 20:29:02.284393 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-pfz5s_2b2f16ec-bd97-4ff0-acf6-af298b2f3736/manager/0.log" Jan 26 20:29:02 crc kubenswrapper[4770]: I0126 20:29:02.396625 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-8wtk6_ffc82616-ae6f-4f03-9c55-c235cd7cb5ff/manager/0.log" Jan 26 20:29:02 crc kubenswrapper[4770]: I0126 20:29:02.495979 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854z8mkz_b594f7f1-d369-4dd7-8d7f-2969df165fb4/manager/0.log" Jan 26 20:29:02 crc kubenswrapper[4770]: I0126 20:29:02.708975 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bf847bbdc-9phhr_b2b075a6-2519-42f2-876d-c0249db54ca4/operator/0.log" Jan 26 20:29:02 crc kubenswrapper[4770]: I0126 20:29:02.955551 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-cttfq_9093abfb-eda1-4bea-a7c8-1610996eec7c/registry-server/0.log" Jan 26 20:29:03 crc kubenswrapper[4770]: I0126 20:29:03.096289 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-745tt_6ebd7b68-7edb-4c6c-9c29-65aa5454b1b3/manager/0.log" Jan 26 20:29:03 crc kubenswrapper[4770]: I0126 20:29:03.404612 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-gwfqm_b6b3bfbb-893b-4122-8534-664e57faa6ce/manager/0.log" Jan 26 20:29:03 crc kubenswrapper[4770]: I0126 20:29:03.490799 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9fnjm_ed015d41-0a86-45bc-ac7b-410e6ef09b6e/operator/0.log" Jan 26 20:29:03 crc kubenswrapper[4770]: I0126 20:29:03.698475 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6xngb_1fb1320e-c82f-4927-a48b-94ce5b6dcc03/manager/0.log" Jan 26 20:29:03 crc kubenswrapper[4770]: I0126 20:29:03.853402 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6796fcb5b-6wf85_c24f34a9-cf76-44f8-8435-ff01eca67ce3/manager/0.log" Jan 26 20:29:03 crc kubenswrapper[4770]: I0126 20:29:03.976779 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-jllkr_bce0b4ae-6301-4b38-b960-13962608dab0/manager/0.log" Jan 26 20:29:04 crc kubenswrapper[4770]: I0126 20:29:04.009521 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4vb4t_752eb71a-ee7a-47da-8945-41eee7a8c6b3/manager/0.log" Jan 26 20:29:04 crc kubenswrapper[4770]: I0126 20:29:04.179477 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6bf5b95546-9qq5g_d9a28594-7011-4810-a859-972dcde899e9/manager/0.log" Jan 26 20:29:11 crc kubenswrapper[4770]: I0126 20:29:11.767312 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:29:11 crc kubenswrapper[4770]: E0126 20:29:11.767970 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:29:25 crc kubenswrapper[4770]: I0126 20:29:25.717735 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dz75h_23cf8f72-83fa-451e-afe9-08b8377f969d/control-plane-machine-set-operator/0.log" Jan 26 20:29:25 crc kubenswrapper[4770]: I0126 20:29:25.775480 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:29:25 crc kubenswrapper[4770]: E0126 20:29:25.775755 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:29:25 crc kubenswrapper[4770]: I0126 20:29:25.859383 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zm2q9_4cd4eed4-e59b-4987-936a-b880b81311a1/kube-rbac-proxy/0.log" Jan 26 20:29:25 crc kubenswrapper[4770]: I0126 20:29:25.866539 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zm2q9_4cd4eed4-e59b-4987-936a-b880b81311a1/machine-api-operator/0.log" Jan 26 20:29:39 crc kubenswrapper[4770]: I0126 20:29:39.188525 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-zg6pt_5e274239-64f9-423e-a00b-0867c43ce747/cert-manager-controller/0.log" Jan 26 20:29:39 crc kubenswrapper[4770]: I0126 20:29:39.255883 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xvnht_5d309de3-0825-4929-9867-fdcd48df6320/cert-manager-cainjector/0.log" Jan 26 20:29:39 crc kubenswrapper[4770]: I0126 20:29:39.381047 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wbbh5_f363eee1-0b76-4000-9ab2-8506a4ccb1db/cert-manager-webhook/0.log" Jan 26 20:29:39 crc kubenswrapper[4770]: I0126 20:29:39.768064 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:29:39 crc kubenswrapper[4770]: E0126 20:29:39.768811 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:29:52 crc kubenswrapper[4770]: I0126 20:29:52.588971 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-qzgv8_4e872b8d-441d-4fe7-abe1-12d880b17f99/nmstate-console-plugin/0.log" Jan 26 20:29:52 crc kubenswrapper[4770]: I0126 20:29:52.712754 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-224k9_47ba2b14-2e10-43cd-9b79-1c9350662bc0/nmstate-handler/0.log" Jan 26 20:29:52 crc kubenswrapper[4770]: I0126 20:29:52.766870 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:29:52 crc kubenswrapper[4770]: E0126 20:29:52.767212 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:29:52 crc kubenswrapper[4770]: I0126 20:29:52.787504 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wx7dh_a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7/kube-rbac-proxy/0.log" Jan 26 20:29:52 crc kubenswrapper[4770]: I0126 20:29:52.842963 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-wx7dh_a0b5a4c0-1a8b-44c7-a2fe-86b4a08628d7/nmstate-metrics/0.log" Jan 26 20:29:52 crc kubenswrapper[4770]: I0126 20:29:52.934079 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-5p9rk_c9d57646-d6ef-42b3-8d4e-445486b6e18d/nmstate-operator/0.log" Jan 26 20:29:53 crc kubenswrapper[4770]: I0126 20:29:53.034468 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qplrm_0f6003df-fc85-4c3a-ad98-822f6e7d670d/nmstate-webhook/0.log" 
Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.159964 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z"] Jan 26 20:30:00 crc kubenswrapper[4770]: E0126 20:30:00.160902 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d135b0-4869-414e-a83c-0cc5ba31bf81" containerName="container-00" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.160916 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d135b0-4869-414e-a83c-0cc5ba31bf81" containerName="container-00" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.161142 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d135b0-4869-414e-a83c-0cc5ba31bf81" containerName="container-00" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.161870 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.201798 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.203319 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.221465 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z"] Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.242437 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-secret-volume\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.242677 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfzfm\" (UniqueName: \"kubernetes.io/projected/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-kube-api-access-cfzfm\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.242757 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-config-volume\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.344191 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfzfm\" (UniqueName: \"kubernetes.io/projected/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-kube-api-access-cfzfm\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.344257 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-config-volume\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.344321 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-secret-volume\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.345722 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-config-volume\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.355464 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-secret-volume\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.391600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfzfm\" (UniqueName: \"kubernetes.io/projected/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-kube-api-access-cfzfm\") pod \"collect-profiles-29490990-vm69z\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:00 crc kubenswrapper[4770]: I0126 20:30:00.541854 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:01 crc kubenswrapper[4770]: I0126 20:30:01.046318 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z"] Jan 26 20:30:01 crc kubenswrapper[4770]: W0126 20:30:01.053733 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a87396d_a8b3_4b9a_8902_581d9c9ecc00.slice/crio-745241540baffde58abd9eaf8a70cb40454b8031b76d53a047bd79ac010d362f WatchSource:0}: Error finding container 745241540baffde58abd9eaf8a70cb40454b8031b76d53a047bd79ac010d362f: Status 404 returned error can't find the container with id 745241540baffde58abd9eaf8a70cb40454b8031b76d53a047bd79ac010d362f Jan 26 20:30:01 crc kubenswrapper[4770]: I0126 20:30:01.876813 4770 generic.go:334] "Generic (PLEG): container finished" podID="4a87396d-a8b3-4b9a-8902-581d9c9ecc00" containerID="5ce079848138f2924767ebbb1a7f123a2153a944a2bc359c0cf9508e67d32187" exitCode=0 Jan 26 20:30:01 crc kubenswrapper[4770]: I0126 20:30:01.876959 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" event={"ID":"4a87396d-a8b3-4b9a-8902-581d9c9ecc00","Type":"ContainerDied","Data":"5ce079848138f2924767ebbb1a7f123a2153a944a2bc359c0cf9508e67d32187"} Jan 26 20:30:01 crc kubenswrapper[4770]: I0126 20:30:01.877370 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" event={"ID":"4a87396d-a8b3-4b9a-8902-581d9c9ecc00","Type":"ContainerStarted","Data":"745241540baffde58abd9eaf8a70cb40454b8031b76d53a047bd79ac010d362f"} Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.306400 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.401431 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-secret-volume\") pod \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.401606 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfzfm\" (UniqueName: \"kubernetes.io/projected/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-kube-api-access-cfzfm\") pod \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.401831 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-config-volume\") pod \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\" (UID: \"4a87396d-a8b3-4b9a-8902-581d9c9ecc00\") " Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.402664 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a87396d-a8b3-4b9a-8902-581d9c9ecc00" (UID: "4a87396d-a8b3-4b9a-8902-581d9c9ecc00"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.412181 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-kube-api-access-cfzfm" (OuterVolumeSpecName: "kube-api-access-cfzfm") pod "4a87396d-a8b3-4b9a-8902-581d9c9ecc00" (UID: "4a87396d-a8b3-4b9a-8902-581d9c9ecc00"). 
InnerVolumeSpecName "kube-api-access-cfzfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.413926 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a87396d-a8b3-4b9a-8902-581d9c9ecc00" (UID: "4a87396d-a8b3-4b9a-8902-581d9c9ecc00"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.504875 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.504913 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.504923 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfzfm\" (UniqueName: \"kubernetes.io/projected/4a87396d-a8b3-4b9a-8902-581d9c9ecc00-kube-api-access-cfzfm\") on node \"crc\" DevicePath \"\"" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.894469 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" event={"ID":"4a87396d-a8b3-4b9a-8902-581d9c9ecc00","Type":"ContainerDied","Data":"745241540baffde58abd9eaf8a70cb40454b8031b76d53a047bd79ac010d362f"} Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.894506 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="745241540baffde58abd9eaf8a70cb40454b8031b76d53a047bd79ac010d362f" Jan 26 20:30:03 crc kubenswrapper[4770]: I0126 20:30:03.894518 4770 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490990-vm69z" Jan 26 20:30:04 crc kubenswrapper[4770]: I0126 20:30:04.402715 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl"] Jan 26 20:30:04 crc kubenswrapper[4770]: I0126 20:30:04.411340 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490945-vhqsl"] Jan 26 20:30:05 crc kubenswrapper[4770]: I0126 20:30:05.783048 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:30:05 crc kubenswrapper[4770]: E0126 20:30:05.783556 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:30:05 crc kubenswrapper[4770]: I0126 20:30:05.784256 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85697e5b-bd2d-48c9-a0f8-62d5ba1f4423" path="/var/lib/kubelet/pods/85697e5b-bd2d-48c9-a0f8-62d5ba1f4423/volumes" Jan 26 20:30:07 crc kubenswrapper[4770]: I0126 20:30:07.875285 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fhb9k_3856ceb2-87c8-4db0-bbb8-66cf7713accc/prometheus-operator/0.log" Jan 26 20:30:08 crc kubenswrapper[4770]: I0126 20:30:08.052273 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js_2d01f9de-1cce-41c6-9a48-914289d32207/prometheus-operator-admission-webhook/0.log" Jan 26 20:30:08 crc 
kubenswrapper[4770]: I0126 20:30:08.068626 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5_2308db67-1c3e-465c-8574-58fe145f34e4/prometheus-operator-admission-webhook/0.log" Jan 26 20:30:08 crc kubenswrapper[4770]: I0126 20:30:08.235090 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kgxzc_5660d99f-cacd-4602-83a8-e6e152380afc/operator/0.log" Jan 26 20:30:08 crc kubenswrapper[4770]: I0126 20:30:08.268638 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gjmw8_1d294f34-81c6-46f1-9fa0-5950a2a7337f/perses-operator/0.log" Jan 26 20:30:19 crc kubenswrapper[4770]: I0126 20:30:19.768275 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:30:19 crc kubenswrapper[4770]: E0126 20:30:19.769772 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:30:22 crc kubenswrapper[4770]: I0126 20:30:22.984632 4770 scope.go:117] "RemoveContainer" containerID="0aa312cb75aec03228b173dcbb9353dd1d9325ce6ae5f1e9f096429ef6222c1e" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.063949 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgxhp_0fa5c4a3-9cf1-470f-a627-4d75201218c6/controller/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.076482 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgxhp_0fa5c4a3-9cf1-470f-a627-4d75201218c6/kube-rbac-proxy/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.283562 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.463477 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.490876 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.529496 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.549013 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.699398 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.760140 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.783151 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.798883 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.947850 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-frr-files/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.947951 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-reloader/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.980482 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/cp-metrics/0.log" Jan 26 20:30:23 crc kubenswrapper[4770]: I0126 20:30:23.991745 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/controller/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.165714 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/frr-metrics/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.184491 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/kube-rbac-proxy-frr/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.194716 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/kube-rbac-proxy/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.455991 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/reloader/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.483590 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-n5vnz_8f9a805c-9078-43b4-a52d-bb6c6d695422/frr-k8s-webhook-server/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.672985 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-859d6f9486-gtpqr_ee88a890-d295-4129-8baf-ade3a43b3758/manager/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.895371 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85d868fd8c-rclln_b7b62592-2dab-442b-a5ef-a02562b7ed0c/webhook-server/0.log" Jan 26 20:30:24 crc kubenswrapper[4770]: I0126 20:30:24.997413 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lxhr9_95fe3572-9eab-4945-bf35-bcf4cec1764d/kube-rbac-proxy/0.log" Jan 26 20:30:25 crc kubenswrapper[4770]: I0126 20:30:25.764111 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lxhr9_95fe3572-9eab-4945-bf35-bcf4cec1764d/speaker/0.log" Jan 26 20:30:26 crc kubenswrapper[4770]: I0126 20:30:26.207712 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkgs9_b6fcf232-40c6-4ec1-a926-03f5ed2e6bbe/frr/0.log" Jan 26 20:30:31 crc kubenswrapper[4770]: I0126 20:30:31.766979 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:30:31 crc kubenswrapper[4770]: E0126 20:30:31.768133 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:30:41 crc kubenswrapper[4770]: I0126 
20:30:41.530290 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/util/0.log" Jan 26 20:30:41 crc kubenswrapper[4770]: I0126 20:30:41.668998 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/pull/0.log" Jan 26 20:30:41 crc kubenswrapper[4770]: I0126 20:30:41.699822 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/util/0.log" Jan 26 20:30:41 crc kubenswrapper[4770]: I0126 20:30:41.741310 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/pull/0.log" Jan 26 20:30:41 crc kubenswrapper[4770]: I0126 20:30:41.907595 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/util/0.log" Jan 26 20:30:41 crc kubenswrapper[4770]: I0126 20:30:41.926830 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/extract/0.log" Jan 26 20:30:41 crc kubenswrapper[4770]: I0126 20:30:41.939403 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4r4n6_43ed5d27-f852-4f01-bf7c-4af96368557e/pull/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.285935 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/util/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.453843 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/util/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.471890 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/pull/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.526533 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/pull/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.700499 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/extract/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.702162 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/util/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.736100 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713cc562_b4afedef-6113-4a5f-94b0-dfe367e727f7/pull/0.log" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.767310 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:30:42 crc kubenswrapper[4770]: E0126 20:30:42.767647 4770 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:30:42 crc kubenswrapper[4770]: I0126 20:30:42.889644 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/util/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.049912 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/util/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.087924 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/pull/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.124847 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/pull/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.303671 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/util/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.315554 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/pull/0.log" Jan 26 20:30:43 crc 
kubenswrapper[4770]: I0126 20:30:43.344344 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08znn4d_aaefc356-416c-4919-adb1-de98e007e7a1/extract/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.494469 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-utilities/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.655435 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-utilities/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.675989 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-content/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.676230 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-content/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.862073 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-utilities/0.log" Jan 26 20:30:43 crc kubenswrapper[4770]: I0126 20:30:43.892715 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/extract-content/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.107998 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-utilities/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.336467 4770 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-utilities/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.419375 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-content/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.423774 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-content/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.623567 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-utilities/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.660867 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/extract-content/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.695691 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7k8l4_727229ac-add6-4217-b9c8-b83ee24a8d11/registry-server/0.log" Jan 26 20:30:44 crc kubenswrapper[4770]: I0126 20:30:44.898184 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ttbbg_38d944cd-c6cb-4cf6-ada9-9077a8b9102e/marketplace-operator/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.036553 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5r25b_50a3775b-da9d-4e62-9695-6e7e0c6ac3cc/registry-server/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.078521 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-utilities/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.267295 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-utilities/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.293826 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-content/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.318374 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-content/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.439230 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-utilities/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.599418 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-utilities/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.601925 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/extract-content/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.665459 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k9258_d89b5e20-acef-49af-a137-a3a69b94cd1e/registry-server/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.803524 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-utilities/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.818943 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-content/0.log" Jan 26 20:30:45 crc kubenswrapper[4770]: I0126 20:30:45.850248 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-content/0.log" Jan 26 20:30:46 crc kubenswrapper[4770]: I0126 20:30:46.025423 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-utilities/0.log" Jan 26 20:30:46 crc kubenswrapper[4770]: I0126 20:30:46.067558 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/extract-content/0.log" Jan 26 20:30:46 crc kubenswrapper[4770]: I0126 20:30:46.798579 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g7flk_e8ce5003-8637-4aaa-a35b-f8b6f9a04905/registry-server/0.log" Jan 26 20:30:55 crc kubenswrapper[4770]: I0126 20:30:55.816752 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:30:55 crc kubenswrapper[4770]: E0126 20:30:55.818095 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:30:59 crc 
kubenswrapper[4770]: I0126 20:30:59.642797 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t2d5w"] Jan 26 20:30:59 crc kubenswrapper[4770]: E0126 20:30:59.643910 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a87396d-a8b3-4b9a-8902-581d9c9ecc00" containerName="collect-profiles" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.643926 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a87396d-a8b3-4b9a-8902-581d9c9ecc00" containerName="collect-profiles" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.644162 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a87396d-a8b3-4b9a-8902-581d9c9ecc00" containerName="collect-profiles" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.646862 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.653149 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2d5w"] Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.798625 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mjp2\" (UniqueName: \"kubernetes.io/projected/ac5978a3-917c-4139-ae3c-8f3568d79f9e-kube-api-access-9mjp2\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.798893 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-catalog-content\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc 
kubenswrapper[4770]: I0126 20:30:59.799010 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-utilities\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.845088 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lzv9"] Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.847537 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.868258 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lzv9"] Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.907520 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mjp2\" (UniqueName: \"kubernetes.io/projected/ac5978a3-917c-4139-ae3c-8f3568d79f9e-kube-api-access-9mjp2\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.907627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-catalog-content\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.907814 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-utilities\") pod \"certified-operators-t2d5w\" 
(UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.909837 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-utilities\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.916589 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-catalog-content\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.956433 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mjp2\" (UniqueName: \"kubernetes.io/projected/ac5978a3-917c-4139-ae3c-8f3568d79f9e-kube-api-access-9mjp2\") pod \"certified-operators-t2d5w\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:30:59 crc kubenswrapper[4770]: I0126 20:30:59.967291 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.019959 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2lrm\" (UniqueName: \"kubernetes.io/projected/d37321a8-ea29-4c1e-815c-1e2f21e2339e-kube-api-access-t2lrm\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.020118 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-utilities\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.020152 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-catalog-content\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.134161 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2lrm\" (UniqueName: \"kubernetes.io/projected/d37321a8-ea29-4c1e-815c-1e2f21e2339e-kube-api-access-t2lrm\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.134675 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-utilities\") pod \"redhat-operators-6lzv9\" (UID: 
\"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.134735 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-catalog-content\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.135600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-catalog-content\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.135886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-utilities\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.166522 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2lrm\" (UniqueName: \"kubernetes.io/projected/d37321a8-ea29-4c1e-815c-1e2f21e2339e-kube-api-access-t2lrm\") pod \"redhat-operators-6lzv9\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.463671 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.521262 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2d5w"] Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.536143 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-9q9js_2d01f9de-1cce-41c6-9a48-914289d32207/prometheus-operator-admission-webhook/0.log" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.659075 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6ccbbb6d5b-cfzv5_2308db67-1c3e-465c-8574-58fe145f34e4/prometheus-operator-admission-webhook/0.log" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.697142 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fhb9k_3856ceb2-87c8-4db0-bbb8-66cf7713accc/prometheus-operator/0.log" Jan 26 20:31:00 crc kubenswrapper[4770]: I0126 20:31:00.947985 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gjmw8_1d294f34-81c6-46f1-9fa0-5950a2a7337f/perses-operator/0.log" Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.010131 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lzv9"] Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.039331 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kgxzc_5660d99f-cacd-4602-83a8-e6e152380afc/operator/0.log" Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.449977 4770 generic.go:334] "Generic (PLEG): container finished" podID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerID="cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c" exitCode=0 Jan 26 20:31:01 
crc kubenswrapper[4770]: I0126 20:31:01.450033 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2d5w" event={"ID":"ac5978a3-917c-4139-ae3c-8f3568d79f9e","Type":"ContainerDied","Data":"cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c"} Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.450365 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2d5w" event={"ID":"ac5978a3-917c-4139-ae3c-8f3568d79f9e","Type":"ContainerStarted","Data":"bfc147285dfa509bf0490c6d5985a82d086210a66881da87e1276946d22d00cf"} Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.451871 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.454813 4770 generic.go:334] "Generic (PLEG): container finished" podID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerID="6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283" exitCode=0 Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.454855 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lzv9" event={"ID":"d37321a8-ea29-4c1e-815c-1e2f21e2339e","Type":"ContainerDied","Data":"6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283"} Jan 26 20:31:01 crc kubenswrapper[4770]: I0126 20:31:01.454885 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lzv9" event={"ID":"d37321a8-ea29-4c1e-815c-1e2f21e2339e","Type":"ContainerStarted","Data":"3d10898c035fd5dd8d04cc4761fdf8e69169b54c678536109701b5f23e11fdb0"} Jan 26 20:31:03 crc kubenswrapper[4770]: I0126 20:31:03.478271 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2d5w" 
event={"ID":"ac5978a3-917c-4139-ae3c-8f3568d79f9e","Type":"ContainerStarted","Data":"0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10"} Jan 26 20:31:03 crc kubenswrapper[4770]: I0126 20:31:03.482380 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lzv9" event={"ID":"d37321a8-ea29-4c1e-815c-1e2f21e2339e","Type":"ContainerStarted","Data":"05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4"} Jan 26 20:31:07 crc kubenswrapper[4770]: I0126 20:31:07.772669 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:31:07 crc kubenswrapper[4770]: E0126 20:31:07.773253 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:31:08 crc kubenswrapper[4770]: I0126 20:31:08.544237 4770 generic.go:334] "Generic (PLEG): container finished" podID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerID="05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4" exitCode=0 Jan 26 20:31:08 crc kubenswrapper[4770]: I0126 20:31:08.544295 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lzv9" event={"ID":"d37321a8-ea29-4c1e-815c-1e2f21e2339e","Type":"ContainerDied","Data":"05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4"} Jan 26 20:31:08 crc kubenswrapper[4770]: I0126 20:31:08.548613 4770 generic.go:334] "Generic (PLEG): container finished" podID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerID="0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10" exitCode=0 Jan 26 20:31:08 crc 
kubenswrapper[4770]: I0126 20:31:08.548688 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2d5w" event={"ID":"ac5978a3-917c-4139-ae3c-8f3568d79f9e","Type":"ContainerDied","Data":"0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10"} Jan 26 20:31:10 crc kubenswrapper[4770]: I0126 20:31:10.569521 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2d5w" event={"ID":"ac5978a3-917c-4139-ae3c-8f3568d79f9e","Type":"ContainerStarted","Data":"46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a"} Jan 26 20:31:10 crc kubenswrapper[4770]: I0126 20:31:10.572475 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lzv9" event={"ID":"d37321a8-ea29-4c1e-815c-1e2f21e2339e","Type":"ContainerStarted","Data":"7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d"} Jan 26 20:31:10 crc kubenswrapper[4770]: I0126 20:31:10.596067 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2d5w" podStartSLOduration=3.853555758 podStartE2EDuration="11.59605127s" podCreationTimestamp="2026-01-26 20:30:59 +0000 UTC" firstStartedPulling="2026-01-26 20:31:01.451535681 +0000 UTC m=+6546.016442413" lastFinishedPulling="2026-01-26 20:31:09.194031193 +0000 UTC m=+6553.758937925" observedRunningTime="2026-01-26 20:31:10.589041621 +0000 UTC m=+6555.153948353" watchObservedRunningTime="2026-01-26 20:31:10.59605127 +0000 UTC m=+6555.160958002" Jan 26 20:31:10 crc kubenswrapper[4770]: I0126 20:31:10.623052 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lzv9" podStartSLOduration=3.915609241 podStartE2EDuration="11.623037597s" podCreationTimestamp="2026-01-26 20:30:59 +0000 UTC" firstStartedPulling="2026-01-26 20:31:01.456453904 +0000 UTC m=+6546.021360636" lastFinishedPulling="2026-01-26 
20:31:09.16388226 +0000 UTC m=+6553.728788992" observedRunningTime="2026-01-26 20:31:10.61681193 +0000 UTC m=+6555.181718662" watchObservedRunningTime="2026-01-26 20:31:10.623037597 +0000 UTC m=+6555.187944329" Jan 26 20:31:19 crc kubenswrapper[4770]: I0126 20:31:19.968456 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:31:19 crc kubenswrapper[4770]: I0126 20:31:19.969111 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:31:20 crc kubenswrapper[4770]: I0126 20:31:20.019766 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:31:20 crc kubenswrapper[4770]: I0126 20:31:20.464038 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:20 crc kubenswrapper[4770]: I0126 20:31:20.464089 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:20 crc kubenswrapper[4770]: I0126 20:31:20.730115 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:31:20 crc kubenswrapper[4770]: I0126 20:31:20.784047 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2d5w"] Jan 26 20:31:21 crc kubenswrapper[4770]: I0126 20:31:21.511833 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lzv9" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="registry-server" probeResult="failure" output=< Jan 26 20:31:21 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Jan 26 20:31:21 crc kubenswrapper[4770]: > Jan 26 20:31:21 crc kubenswrapper[4770]: I0126 20:31:21.767531 
4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:31:21 crc kubenswrapper[4770]: E0126 20:31:21.767833 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:31:22 crc kubenswrapper[4770]: I0126 20:31:22.666721 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t2d5w" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="registry-server" containerID="cri-o://46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a" gracePeriod=2 Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.190362 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.346422 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mjp2\" (UniqueName: \"kubernetes.io/projected/ac5978a3-917c-4139-ae3c-8f3568d79f9e-kube-api-access-9mjp2\") pod \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.346536 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-catalog-content\") pod \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.346676 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-utilities\") pod \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\" (UID: \"ac5978a3-917c-4139-ae3c-8f3568d79f9e\") " Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.362658 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-utilities" (OuterVolumeSpecName: "utilities") pod "ac5978a3-917c-4139-ae3c-8f3568d79f9e" (UID: "ac5978a3-917c-4139-ae3c-8f3568d79f9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.370841 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5978a3-917c-4139-ae3c-8f3568d79f9e-kube-api-access-9mjp2" (OuterVolumeSpecName: "kube-api-access-9mjp2") pod "ac5978a3-917c-4139-ae3c-8f3568d79f9e" (UID: "ac5978a3-917c-4139-ae3c-8f3568d79f9e"). InnerVolumeSpecName "kube-api-access-9mjp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.437227 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac5978a3-917c-4139-ae3c-8f3568d79f9e" (UID: "ac5978a3-917c-4139-ae3c-8f3568d79f9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.449458 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mjp2\" (UniqueName: \"kubernetes.io/projected/ac5978a3-917c-4139-ae3c-8f3568d79f9e-kube-api-access-9mjp2\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.449491 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.449503 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5978a3-917c-4139-ae3c-8f3568d79f9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.678862 4770 generic.go:334] "Generic (PLEG): container finished" podID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerID="46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a" exitCode=0 Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.678905 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2d5w" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.678920 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2d5w" event={"ID":"ac5978a3-917c-4139-ae3c-8f3568d79f9e","Type":"ContainerDied","Data":"46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a"} Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.679308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2d5w" event={"ID":"ac5978a3-917c-4139-ae3c-8f3568d79f9e","Type":"ContainerDied","Data":"bfc147285dfa509bf0490c6d5985a82d086210a66881da87e1276946d22d00cf"} Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.679337 4770 scope.go:117] "RemoveContainer" containerID="46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.706392 4770 scope.go:117] "RemoveContainer" containerID="0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.717060 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2d5w"] Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.724976 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t2d5w"] Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.733123 4770 scope.go:117] "RemoveContainer" containerID="cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.783189 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" path="/var/lib/kubelet/pods/ac5978a3-917c-4139-ae3c-8f3568d79f9e/volumes" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.787393 4770 scope.go:117] "RemoveContainer" 
containerID="46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a" Jan 26 20:31:23 crc kubenswrapper[4770]: E0126 20:31:23.787789 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a\": container with ID starting with 46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a not found: ID does not exist" containerID="46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.787819 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a"} err="failed to get container status \"46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a\": rpc error: code = NotFound desc = could not find container \"46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a\": container with ID starting with 46ee376d3ada5804006e2bebbca9d3a7974e103d256466b1082b5a854562570a not found: ID does not exist" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.787838 4770 scope.go:117] "RemoveContainer" containerID="0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10" Jan 26 20:31:23 crc kubenswrapper[4770]: E0126 20:31:23.788142 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10\": container with ID starting with 0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10 not found: ID does not exist" containerID="0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.788165 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10"} err="failed to get container status \"0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10\": rpc error: code = NotFound desc = could not find container \"0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10\": container with ID starting with 0a2cf31da02533f6160e192566a874ec154f9700150ac30cb8b30a769bea9c10 not found: ID does not exist" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.788177 4770 scope.go:117] "RemoveContainer" containerID="cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c" Jan 26 20:31:23 crc kubenswrapper[4770]: E0126 20:31:23.788509 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c\": container with ID starting with cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c not found: ID does not exist" containerID="cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c" Jan 26 20:31:23 crc kubenswrapper[4770]: I0126 20:31:23.788530 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c"} err="failed to get container status \"cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c\": rpc error: code = NotFound desc = could not find container \"cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c\": container with ID starting with cc53b43065e3375090889f9ff964878c5b45127730d6a10aa6b8942bd139ec2c not found: ID does not exist" Jan 26 20:31:30 crc kubenswrapper[4770]: I0126 20:31:30.518517 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:30 crc kubenswrapper[4770]: I0126 20:31:30.589203 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:30 crc kubenswrapper[4770]: I0126 20:31:30.841245 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lzv9"] Jan 26 20:31:31 crc kubenswrapper[4770]: I0126 20:31:31.779007 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lzv9" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="registry-server" containerID="cri-o://7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d" gracePeriod=2 Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.456196 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.534430 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2lrm\" (UniqueName: \"kubernetes.io/projected/d37321a8-ea29-4c1e-815c-1e2f21e2339e-kube-api-access-t2lrm\") pod \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.534595 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-catalog-content\") pod \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.534760 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-utilities\") pod \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\" (UID: \"d37321a8-ea29-4c1e-815c-1e2f21e2339e\") " Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.535897 4770 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-utilities" (OuterVolumeSpecName: "utilities") pod "d37321a8-ea29-4c1e-815c-1e2f21e2339e" (UID: "d37321a8-ea29-4c1e-815c-1e2f21e2339e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.541493 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37321a8-ea29-4c1e-815c-1e2f21e2339e-kube-api-access-t2lrm" (OuterVolumeSpecName: "kube-api-access-t2lrm") pod "d37321a8-ea29-4c1e-815c-1e2f21e2339e" (UID: "d37321a8-ea29-4c1e-815c-1e2f21e2339e"). InnerVolumeSpecName "kube-api-access-t2lrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.637170 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.637228 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2lrm\" (UniqueName: \"kubernetes.io/projected/d37321a8-ea29-4c1e-815c-1e2f21e2339e-kube-api-access-t2lrm\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.662652 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d37321a8-ea29-4c1e-815c-1e2f21e2339e" (UID: "d37321a8-ea29-4c1e-815c-1e2f21e2339e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.739426 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d37321a8-ea29-4c1e-815c-1e2f21e2339e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.790506 4770 generic.go:334] "Generic (PLEG): container finished" podID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerID="7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d" exitCode=0 Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.791460 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lzv9" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.792483 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lzv9" event={"ID":"d37321a8-ea29-4c1e-815c-1e2f21e2339e","Type":"ContainerDied","Data":"7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d"} Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.792578 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lzv9" event={"ID":"d37321a8-ea29-4c1e-815c-1e2f21e2339e","Type":"ContainerDied","Data":"3d10898c035fd5dd8d04cc4761fdf8e69169b54c678536109701b5f23e11fdb0"} Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.792611 4770 scope.go:117] "RemoveContainer" containerID="7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.823566 4770 scope.go:117] "RemoveContainer" containerID="05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.865577 4770 scope.go:117] "RemoveContainer" containerID="6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 
20:31:32.875656 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lzv9"] Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.885070 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lzv9"] Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.910134 4770 scope.go:117] "RemoveContainer" containerID="7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d" Jan 26 20:31:32 crc kubenswrapper[4770]: E0126 20:31:32.910639 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d\": container with ID starting with 7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d not found: ID does not exist" containerID="7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.910691 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d"} err="failed to get container status \"7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d\": rpc error: code = NotFound desc = could not find container \"7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d\": container with ID starting with 7cf3d6901997ddf36739e46ccd48dd23ae08de8034f29e163ce9c918bc3d912d not found: ID does not exist" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.910757 4770 scope.go:117] "RemoveContainer" containerID="05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4" Jan 26 20:31:32 crc kubenswrapper[4770]: E0126 20:31:32.911404 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4\": container with ID starting with 
05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4 not found: ID does not exist" containerID="05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.911431 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4"} err="failed to get container status \"05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4\": rpc error: code = NotFound desc = could not find container \"05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4\": container with ID starting with 05f284c99e3bcf78b3dfa7f5c0510ae491cb3743892cf5f35e959de1c9cceed4 not found: ID does not exist" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.911445 4770 scope.go:117] "RemoveContainer" containerID="6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283" Jan 26 20:31:32 crc kubenswrapper[4770]: E0126 20:31:32.911949 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283\": container with ID starting with 6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283 not found: ID does not exist" containerID="6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283" Jan 26 20:31:32 crc kubenswrapper[4770]: I0126 20:31:32.912007 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283"} err="failed to get container status \"6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283\": rpc error: code = NotFound desc = could not find container \"6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283\": container with ID starting with 6ede940b9a4f76addbbe4208f574f8ca93454bf07ac8a7621c0fea1669bca283 not found: ID does not 
exist" Jan 26 20:31:33 crc kubenswrapper[4770]: I0126 20:31:33.771565 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:31:33 crc kubenswrapper[4770]: E0126 20:31:33.772055 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:31:33 crc kubenswrapper[4770]: I0126 20:31:33.781790 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" path="/var/lib/kubelet/pods/d37321a8-ea29-4c1e-815c-1e2f21e2339e/volumes" Jan 26 20:31:44 crc kubenswrapper[4770]: I0126 20:31:44.768766 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:31:44 crc kubenswrapper[4770]: E0126 20:31:44.770299 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:31:55 crc kubenswrapper[4770]: I0126 20:31:55.772757 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:31:55 crc kubenswrapper[4770]: E0126 20:31:55.776505 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:32:09 crc kubenswrapper[4770]: I0126 20:32:09.769493 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:32:09 crc kubenswrapper[4770]: E0126 20:32:09.771891 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:32:20 crc kubenswrapper[4770]: I0126 20:32:20.767731 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:32:20 crc kubenswrapper[4770]: E0126 20:32:20.768637 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:32:31 crc kubenswrapper[4770]: I0126 20:32:31.769132 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:32:31 crc kubenswrapper[4770]: E0126 20:32:31.770394 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:32:42 crc kubenswrapper[4770]: I0126 20:32:42.767224 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:32:42 crc kubenswrapper[4770]: E0126 20:32:42.768006 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:32:54 crc kubenswrapper[4770]: I0126 20:32:54.768726 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:32:54 crc kubenswrapper[4770]: E0126 20:32:54.769754 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:33:08 crc kubenswrapper[4770]: I0126 20:33:08.767140 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:33:08 crc kubenswrapper[4770]: E0126 20:33:08.768401 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:33:09 crc kubenswrapper[4770]: I0126 20:33:09.914443 4770 generic.go:334] "Generic (PLEG): container finished" podID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510" exitCode=0 Jan 26 20:33:09 crc kubenswrapper[4770]: I0126 20:33:09.914568 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" event={"ID":"a37145bb-62d6-4394-abcd-6b1bce3d038c","Type":"ContainerDied","Data":"fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"} Jan 26 20:33:09 crc kubenswrapper[4770]: I0126 20:33:09.915920 4770 scope.go:117] "RemoveContainer" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510" Jan 26 20:33:10 crc kubenswrapper[4770]: I0126 20:33:10.157976 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qtvwf_must-gather-4qzdd_a37145bb-62d6-4394-abcd-6b1bce3d038c/gather/0.log" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.699495 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9l5sr"] Jan 26 20:33:16 crc kubenswrapper[4770]: E0126 20:33:16.701829 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="extract-utilities" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.701877 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="extract-utilities" Jan 26 20:33:16 crc kubenswrapper[4770]: E0126 20:33:16.701915 4770 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="extract-content" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.701931 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="extract-content" Jan 26 20:33:16 crc kubenswrapper[4770]: E0126 20:33:16.701964 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="registry-server" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.701980 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="registry-server" Jan 26 20:33:16 crc kubenswrapper[4770]: E0126 20:33:16.702026 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="extract-content" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.702040 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="extract-content" Jan 26 20:33:16 crc kubenswrapper[4770]: E0126 20:33:16.702088 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="extract-utilities" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.702105 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="extract-utilities" Jan 26 20:33:16 crc kubenswrapper[4770]: E0126 20:33:16.702160 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="registry-server" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.702177 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="registry-server" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.702680 4770 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ac5978a3-917c-4139-ae3c-8f3568d79f9e" containerName="registry-server" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.702784 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d37321a8-ea29-4c1e-815c-1e2f21e2339e" containerName="registry-server" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.706288 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.719475 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9l5sr"] Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.849331 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-utilities\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.849376 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm76m\" (UniqueName: \"kubernetes.io/projected/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-kube-api-access-sm76m\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.849890 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-catalog-content\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.951593 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-catalog-content\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.952111 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-catalog-content\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.952467 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-utilities\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.952595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm76m\" (UniqueName: \"kubernetes.io/projected/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-kube-api-access-sm76m\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.952769 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-utilities\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:16 crc kubenswrapper[4770]: I0126 20:33:16.989721 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sm76m\" (UniqueName: \"kubernetes.io/projected/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-kube-api-access-sm76m\") pod \"redhat-marketplace-9l5sr\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") " pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:17 crc kubenswrapper[4770]: I0126 20:33:17.041368 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9l5sr" Jan 26 20:33:17 crc kubenswrapper[4770]: I0126 20:33:17.561202 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9l5sr"] Jan 26 20:33:18 crc kubenswrapper[4770]: I0126 20:33:18.007291 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerID="6a0384fc3402242b569a1b4d2f480a1061d13d46f41e097f7704076eb997314e" exitCode=0 Jan 26 20:33:18 crc kubenswrapper[4770]: I0126 20:33:18.007454 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l5sr" event={"ID":"ec8c20c0-0461-4a6d-9987-baf25bf93c8d","Type":"ContainerDied","Data":"6a0384fc3402242b569a1b4d2f480a1061d13d46f41e097f7704076eb997314e"} Jan 26 20:33:18 crc kubenswrapper[4770]: I0126 20:33:18.007604 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l5sr" event={"ID":"ec8c20c0-0461-4a6d-9987-baf25bf93c8d","Type":"ContainerStarted","Data":"98f846cc3ee5c1c96eeca2a5d8dedb3b877451b6ce6f34ca87dfc19ec99f6c07"} Jan 26 20:33:19 crc kubenswrapper[4770]: I0126 20:33:19.021452 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l5sr" event={"ID":"ec8c20c0-0461-4a6d-9987-baf25bf93c8d","Type":"ContainerStarted","Data":"c2e85e2854c545127c607f54db38e9fc67d09bc20189967466c61547a2323468"} Jan 26 20:33:19 crc kubenswrapper[4770]: I0126 20:33:19.767891 4770 scope.go:117] "RemoveContainer" 
containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8" Jan 26 20:33:19 crc kubenswrapper[4770]: E0126 20:33:19.768414 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nnf7c_openshift-machine-config-operator(6109a686-3ab2-465e-8a96-354f2ecbf491)\"" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" Jan 26 20:33:20 crc kubenswrapper[4770]: I0126 20:33:20.040062 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerID="c2e85e2854c545127c607f54db38e9fc67d09bc20189967466c61547a2323468" exitCode=0 Jan 26 20:33:20 crc kubenswrapper[4770]: I0126 20:33:20.040142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l5sr" event={"ID":"ec8c20c0-0461-4a6d-9987-baf25bf93c8d","Type":"ContainerDied","Data":"c2e85e2854c545127c607f54db38e9fc67d09bc20189967466c61547a2323468"} Jan 26 20:33:21 crc kubenswrapper[4770]: I0126 20:33:21.057824 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l5sr" event={"ID":"ec8c20c0-0461-4a6d-9987-baf25bf93c8d","Type":"ContainerStarted","Data":"252705f210ce4369ff81c8abc7cf8b98b3e827785127e3bb57ddff658b90d02a"} Jan 26 20:33:21 crc kubenswrapper[4770]: I0126 20:33:21.091163 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9l5sr" podStartSLOduration=2.59158919 podStartE2EDuration="5.09114368s" podCreationTimestamp="2026-01-26 20:33:16 +0000 UTC" firstStartedPulling="2026-01-26 20:33:18.009887747 +0000 UTC m=+6682.574794509" lastFinishedPulling="2026-01-26 20:33:20.509442267 +0000 UTC m=+6685.074348999" observedRunningTime="2026-01-26 20:33:21.078824979 +0000 UTC 
m=+6685.643731741" watchObservedRunningTime="2026-01-26 20:33:21.09114368 +0000 UTC m=+6685.656050422" Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.052303 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qtvwf/must-gather-4qzdd"] Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.052627 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerName="copy" containerID="cri-o://5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9" gracePeriod=2 Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.062820 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qtvwf/must-gather-4qzdd"] Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.494668 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qtvwf_must-gather-4qzdd_a37145bb-62d6-4394-abcd-6b1bce3d038c/copy/0.log" Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.495398 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qtvwf/must-gather-4qzdd" Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.580440 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a37145bb-62d6-4394-abcd-6b1bce3d038c-must-gather-output\") pod \"a37145bb-62d6-4394-abcd-6b1bce3d038c\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.580892 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlzxm\" (UniqueName: \"kubernetes.io/projected/a37145bb-62d6-4394-abcd-6b1bce3d038c-kube-api-access-rlzxm\") pod \"a37145bb-62d6-4394-abcd-6b1bce3d038c\" (UID: \"a37145bb-62d6-4394-abcd-6b1bce3d038c\") " Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.588778 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37145bb-62d6-4394-abcd-6b1bce3d038c-kube-api-access-rlzxm" (OuterVolumeSpecName: "kube-api-access-rlzxm") pod "a37145bb-62d6-4394-abcd-6b1bce3d038c" (UID: "a37145bb-62d6-4394-abcd-6b1bce3d038c"). InnerVolumeSpecName "kube-api-access-rlzxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.683977 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlzxm\" (UniqueName: \"kubernetes.io/projected/a37145bb-62d6-4394-abcd-6b1bce3d038c-kube-api-access-rlzxm\") on node \"crc\" DevicePath \"\"" Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.765463 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37145bb-62d6-4394-abcd-6b1bce3d038c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a37145bb-62d6-4394-abcd-6b1bce3d038c" (UID: "a37145bb-62d6-4394-abcd-6b1bce3d038c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:33:22 crc kubenswrapper[4770]: I0126 20:33:22.785461 4770 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a37145bb-62d6-4394-abcd-6b1bce3d038c-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.078742 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qtvwf_must-gather-4qzdd_a37145bb-62d6-4394-abcd-6b1bce3d038c/copy/0.log"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.079987 4770 generic.go:334] "Generic (PLEG): container finished" podID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerID="5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9" exitCode=143
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.080050 4770 scope.go:117] "RemoveContainer" containerID="5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.080083 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qtvwf/must-gather-4qzdd"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.100038 4770 scope.go:117] "RemoveContainer" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.196462 4770 scope.go:117] "RemoveContainer" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.203580 4770 scope.go:117] "RemoveContainer" containerID="5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9"
Jan 26 20:33:23 crc kubenswrapper[4770]: E0126 20:33:23.203640 4770 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_gather_must-gather-4qzdd_openshift-must-gather-qtvwf_a37145bb-62d6-4394-abcd-6b1bce3d038c_0 in pod sandbox d2159f696f174d6123097565bea2594ac47550a2766d93468dd122bacb7a2750 from index: no such id: 'fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510'" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"
Jan 26 20:33:23 crc kubenswrapper[4770]: E0126 20:33:23.203711 4770 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_gather_must-gather-4qzdd_openshift-must-gather-qtvwf_a37145bb-62d6-4394-abcd-6b1bce3d038c_0 in pod sandbox d2159f696f174d6123097565bea2594ac47550a2766d93468dd122bacb7a2750 from index: no such id: 'fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510'" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"
Jan 26 20:33:23 crc kubenswrapper[4770]: E0126 20:33:23.204072 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9\": container with ID starting with 5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9 not found: ID does not exist" containerID="5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.204122 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9"} err="failed to get container status \"5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9\": rpc error: code = NotFound desc = could not find container \"5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9\": container with ID starting with 5f63ddbe7a0b22d09b21aba18afedfe7772c7db9f1ea6aadec8e9b2d0aabd8e9 not found: ID does not exist"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.204151 4770 scope.go:117] "RemoveContainer" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"
Jan 26 20:33:23 crc kubenswrapper[4770]: E0126 20:33:23.204462 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510\": container with ID starting with fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510 not found: ID does not exist" containerID="fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.204523 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510"} err="failed to get container status \"fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510\": rpc error: code = NotFound desc = could not find container \"fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510\": container with ID starting with fbc4443a74b91eb5b3c5e59dea6c24a4582cad000758dd8ea1cc883697ca7510 not found: ID does not exist"
Jan 26 20:33:23 crc kubenswrapper[4770]: I0126 20:33:23.784319 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" path="/var/lib/kubelet/pods/a37145bb-62d6-4394-abcd-6b1bce3d038c/volumes"
Jan 26 20:33:27 crc kubenswrapper[4770]: I0126 20:33:27.041814 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9l5sr"
Jan 26 20:33:27 crc kubenswrapper[4770]: I0126 20:33:27.042429 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9l5sr"
Jan 26 20:33:27 crc kubenswrapper[4770]: I0126 20:33:27.129817 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9l5sr"
Jan 26 20:33:27 crc kubenswrapper[4770]: I0126 20:33:27.216188 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9l5sr"
Jan 26 20:33:27 crc kubenswrapper[4770]: I0126 20:33:27.387556 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9l5sr"]
Jan 26 20:33:29 crc kubenswrapper[4770]: I0126 20:33:29.149090 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9l5sr" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="registry-server" containerID="cri-o://252705f210ce4369ff81c8abc7cf8b98b3e827785127e3bb57ddff658b90d02a" gracePeriod=2
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.159948 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerID="252705f210ce4369ff81c8abc7cf8b98b3e827785127e3bb57ddff658b90d02a" exitCode=0
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.160145 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l5sr" event={"ID":"ec8c20c0-0461-4a6d-9987-baf25bf93c8d","Type":"ContainerDied","Data":"252705f210ce4369ff81c8abc7cf8b98b3e827785127e3bb57ddff658b90d02a"}
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.160320 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9l5sr" event={"ID":"ec8c20c0-0461-4a6d-9987-baf25bf93c8d","Type":"ContainerDied","Data":"98f846cc3ee5c1c96eeca2a5d8dedb3b877451b6ce6f34ca87dfc19ec99f6c07"}
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.160338 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f846cc3ee5c1c96eeca2a5d8dedb3b877451b6ce6f34ca87dfc19ec99f6c07"
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.246100 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9l5sr"
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.355173 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm76m\" (UniqueName: \"kubernetes.io/projected/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-kube-api-access-sm76m\") pod \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") "
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.355231 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-utilities\") pod \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") "
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.355397 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-catalog-content\") pod \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\" (UID: \"ec8c20c0-0461-4a6d-9987-baf25bf93c8d\") "
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.356247 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-utilities" (OuterVolumeSpecName: "utilities") pod "ec8c20c0-0461-4a6d-9987-baf25bf93c8d" (UID: "ec8c20c0-0461-4a6d-9987-baf25bf93c8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.361231 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-kube-api-access-sm76m" (OuterVolumeSpecName: "kube-api-access-sm76m") pod "ec8c20c0-0461-4a6d-9987-baf25bf93c8d" (UID: "ec8c20c0-0461-4a6d-9987-baf25bf93c8d"). InnerVolumeSpecName "kube-api-access-sm76m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.378148 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec8c20c0-0461-4a6d-9987-baf25bf93c8d" (UID: "ec8c20c0-0461-4a6d-9987-baf25bf93c8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.458271 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm76m\" (UniqueName: \"kubernetes.io/projected/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-kube-api-access-sm76m\") on node \"crc\" DevicePath \"\""
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.458301 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 20:33:30 crc kubenswrapper[4770]: I0126 20:33:30.458312 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8c20c0-0461-4a6d-9987-baf25bf93c8d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 20:33:31 crc kubenswrapper[4770]: I0126 20:33:31.171360 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9l5sr"
Jan 26 20:33:31 crc kubenswrapper[4770]: I0126 20:33:31.207976 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9l5sr"]
Jan 26 20:33:31 crc kubenswrapper[4770]: I0126 20:33:31.219236 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9l5sr"]
Jan 26 20:33:31 crc kubenswrapper[4770]: I0126 20:33:31.783352 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" path="/var/lib/kubelet/pods/ec8c20c0-0461-4a6d-9987-baf25bf93c8d/volumes"
Jan 26 20:33:33 crc kubenswrapper[4770]: I0126 20:33:33.767823 4770 scope.go:117] "RemoveContainer" containerID="622ed32dc8b19d00e7695e501c0bbd441492d2b199183a618913f2a2118d25f8"
Jan 26 20:33:34 crc kubenswrapper[4770]: I0126 20:33:34.206308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" event={"ID":"6109a686-3ab2-465e-8a96-354f2ecbf491","Type":"ContainerStarted","Data":"88e1508790506ae7e0861361ebe3d78d380b8d27698d7befe0dd2c7bc4c8db57"}
Jan 26 20:36:00 crc kubenswrapper[4770]: I0126 20:36:00.330115 4770 patch_prober.go:28] interesting pod/machine-config-daemon-nnf7c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 20:36:00 crc kubenswrapper[4770]: I0126 20:36:00.330666 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nnf7c" podUID="6109a686-3ab2-465e-8a96-354f2ecbf491" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.832469 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j52vf"]
Jan 26 20:36:10 crc kubenswrapper[4770]: E0126 20:36:10.834050 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="extract-utilities"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834070 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="extract-utilities"
Jan 26 20:36:10 crc kubenswrapper[4770]: E0126 20:36:10.834090 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="extract-content"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834100 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="extract-content"
Jan 26 20:36:10 crc kubenswrapper[4770]: E0126 20:36:10.834127 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerName="gather"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834136 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerName="gather"
Jan 26 20:36:10 crc kubenswrapper[4770]: E0126 20:36:10.834153 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerName="copy"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834160 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerName="copy"
Jan 26 20:36:10 crc kubenswrapper[4770]: E0126 20:36:10.834184 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="registry-server"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834191 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="registry-server"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834434 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8c20c0-0461-4a6d-9987-baf25bf93c8d" containerName="registry-server"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834461 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerName="gather"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.834477 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37145bb-62d6-4394-abcd-6b1bce3d038c" containerName="copy"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.836222 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.889313 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j52vf"]
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.931796 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x9c2\" (UniqueName: \"kubernetes.io/projected/0999ea5e-852a-45f0-afdf-eff977635330-kube-api-access-9x9c2\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.931898 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0999ea5e-852a-45f0-afdf-eff977635330-catalog-content\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:10 crc kubenswrapper[4770]: I0126 20:36:10.931933 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0999ea5e-852a-45f0-afdf-eff977635330-utilities\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.033688 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x9c2\" (UniqueName: \"kubernetes.io/projected/0999ea5e-852a-45f0-afdf-eff977635330-kube-api-access-9x9c2\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.034280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0999ea5e-852a-45f0-afdf-eff977635330-catalog-content\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.034314 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0999ea5e-852a-45f0-afdf-eff977635330-utilities\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.034933 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0999ea5e-852a-45f0-afdf-eff977635330-utilities\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.035136 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0999ea5e-852a-45f0-afdf-eff977635330-catalog-content\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.058506 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x9c2\" (UniqueName: \"kubernetes.io/projected/0999ea5e-852a-45f0-afdf-eff977635330-kube-api-access-9x9c2\") pod \"community-operators-j52vf\" (UID: \"0999ea5e-852a-45f0-afdf-eff977635330\") " pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.159938 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j52vf"
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.808872 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j52vf"]
Jan 26 20:36:11 crc kubenswrapper[4770]: I0126 20:36:11.884674 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j52vf" event={"ID":"0999ea5e-852a-45f0-afdf-eff977635330","Type":"ContainerStarted","Data":"1f9bb37f7e2259294ecd0d5c7754401ccdb923b700b03945024f0c6f53cc829d"}
Jan 26 20:36:12 crc kubenswrapper[4770]: I0126 20:36:12.903474 4770 generic.go:334] "Generic (PLEG): container finished" podID="0999ea5e-852a-45f0-afdf-eff977635330" containerID="ebb85d8854de1bd85908e2f65605d87eb81cf1efd3b277919c0be62d563f3b32" exitCode=0
Jan 26 20:36:12 crc kubenswrapper[4770]: I0126 20:36:12.903572 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j52vf" event={"ID":"0999ea5e-852a-45f0-afdf-eff977635330","Type":"ContainerDied","Data":"ebb85d8854de1bd85908e2f65605d87eb81cf1efd3b277919c0be62d563f3b32"}
Jan 26 20:36:12 crc kubenswrapper[4770]: I0126 20:36:12.910643 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider